Fuck AI

2756 readers
1251 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
MODERATORS
1
 
 

I want to apologize for changing the description without telling people first. After reading arguments about how AI has been so overhyped, I'm not that frightened by it. It's awful that it hallucinates, and that it just spews garbage onto YouTube and Facebook, but it won't completely upend society. I'll have articles abound on AI hype, because they're quite funny, and gives me a sense of ease knowing that, despite blatant lies being easy to tell, it's way harder to fake actual evidence.

I also want to factor in people who think that there's nothing anyone can do. I've come to realize that there might not be a way to attack OpenAI, MidJourney, or Stable Diffusion. These people, which I will call Doomers from an AIHWOS article, are perfectly welcome here. You can certainly come along and read the AI Hype Wall Of Shame, or the diminishing returns of Deep Learning. Maybe one can even become a Mod!

Boosters, or people who heavily use AI and see it as a source of good, ARE NOT ALLOWED HERE! I've seen Boosters dox, threaten, and harass artists over on Reddit and Twitter, and they constantly champion artists losing their jobs. They go against the very purpose of this community. If I hear a comment on here saying that AI is "making things good" or cheering on putting anyone out of a job, and the commenter does not retract their statement, said commenter will be permanently banned. FA&FO.

2
3
 
 

Alright, I just want to clarify that I've never modded a Lemmy community before. I just have the mantra of "if nobody's doing the right thing, do it yourself". I was also motivated by the decision from u/spez to let an unknown AI company use Reddit's imagery. If you know how to moderate well, please let me know. Also, feel free to discuss ways to attack AI development, and if you have evidence of AIBros being cruel and remorseless, make sure to save the evidence for people "on the fence". Remember, we don't know if AI is unstoppable. AI uses up loads of energy to be powered, and tons of circuitry. There may very well be an end to this cruelty, and it's up to us to begin that end.

4
 
 

Source (Bluesky)

5
6
 
 
7
8
 
 

... Make no mistake: AI is not just another technology. It is power, scaled. And in the hands of the far right, it becomes the most effective tool for dismantling democracy ever invented.

We’re not just fighting bad actors anymore: We’re fighting machines trained to think like them.

Authoritarians—whether MAGA-aligned in the United States or part of the global movement that includes Russian President Vladimir Putin, Hungarian Prime Minister Viktor Orbán, Indian Prime Minister Narendra Modi, Israeli Prime Minister Benjamin Netanyahu, and others—are not blind to the potential of AI. They understand it instinctively: its ability to simulate, to deceive, to surveil, and to dominate. While progressives and democratic institutions have scrambled to comprehend its implications, the authoritarians have already started weaponizing it with devastating efficiency.

Let’s look at the mechanisms.

AI can now generate millions of personalized political messages in seconds, each calibrated to manipulate a voter’s specific fears or biases. It can create entire fake news outlets, populate them with AI-generated journalists, and flood your social feed with content that looks real, sounds real, and feels familiar, all without a single human behind it. Imagine the power of Joseph Goebbels’ propaganda machine, but with superintelligence behind the wheel and zero friction. That’s where we’re heading.

And that’s just the beginning.

Authoritarian regimes can—and already are—using AI to surveil and intimidate their citizens. What China has perfected with facial recognition and loyalty scoring, MAGA-aligned figures in the U.S. are watching closely, eager to adopt and adapt. Right-wing sheriffs and local governments could soon use AI to track protestors, compile digital dossiers, and “predict” criminal behavior in communities deemed politically undesirable.

If the government knows not just where you are, but what you’re thinking, organizing, or reading—and it can fabricate “evidence” to match—freedom of thought becomes a quaint memory...

Imagine a future where police departments outsource their decision-making to “neutral” algorithms, algorithms coded with the biases of their creators like Elon Musk is doing by training Grok on Xitter. Where AI-driven systems deny permits, benefits, or even due process based on behavioral profiles. Where loyalty to the regime is rewarded with access, and dissent is flagged by invisible systems you can’t appeal.

That’s not democracy. That’s techno-feudalism, wrapped in a red-white-and-blue flag...

9
 
 

When I search for anything on Google or DuckDuckGo, more than half of the results are useless AI generated articles.

Those articles are generated to get in the first results of requests, since the search engine use algorithms to index websites and pages.

If we manually curate "good" websites (newspapers, forums, encyclopedias, anything that can be considered a good source) and only index their contents, would it be possible to create a good ol'fashioned search engine? Does it already exist?

10
 
 

...The results revealed that models such as OpenAI's GPT-4o and Antropic's Claude were "distinctly pacifist," according to CSIS fellow Yasir Atalan. They opted for the use of force in fewer than 17% of scenarios. But three other models evaluated — Meta's Llama, Alibaba Cloud's Qwen2, and Google's Gemini — were far more aggressive, favoring escalation over de-escalation much more frequently — up to 45% of the time.

What's more, the outputs varied according to the country. For an imaginary diplomat from the U.S., U.K. or France, for example, these AI systems tended to recommend more aggressive — or escalatory — policy, while suggesting de-escalation as the best advice for Russia or China. It shows that "you cannot just use off-the-shelf models," Atalan says. "You need to assess their patterns and align them with your institutional approach."

Russ Berkoff, a retired U.S. Army Special Forces officer and an AI strategist at Johns Hopkins University, sees that variability as a product of human influence. "The people who write the software — their biases come with it," he says. "One algorithm might escalate; another might de-escalate. That's not about the AI. That's about who built it."...

Reddie also recognizes some of the technology's limitations. As long as diplomacy follows a familiar narrative, all might go well, he says, but "if you truly think that your geopolitical challenge is a black swan, AI tools are not going to be useful to you."

Jensen also recognizes many of those concerns, but believes they can be overcome. His fears are more prosaic. Jensen sees two possible futures for the role of AI systems in the future of American foreign policy.

"In one version of the State Department's future … we've loaded diplomatic cables and trained [AI] on diplomatic tasks," and the AI spits out useful information that can be used to resolve pressing diplomatic problems.

The other version, though, "looks like something out of Idiocracy," he says, referring to the 2006 film about a dystopian, low-IQ future. "Everyone has a digital assistant, but it's as useless as [Microsoft's] Clippy."

11
 
 

Source (Imgur)

12
13
 
 

Also just some outstanding promotional material: https://www.youtube.com/watch?v=jnJOHT2f2_4 Alternate frontend

14
 
 

Key findings include the below.

Industry adoption of AI code generation models may pose risks to software supply chain security. However, these risks will not be evenly distributed across organizations. Larger, more well-resourced organizations will have an advantage over organizations that face cost and workforce constraints.

Multiple stakeholders have roles to play in helping to mitigate potential security risks related to AI-generated code. The burden of ensuring that AI-generated code outputs are secure should not rest solely on individual users, but also on AI developers, organizations producing code at scale, and those who can improve security at large, such as policymaking bodies or industry leaders. Existing guidance such as secure software development practices and the NIST Cybersecurity Framework remains essential to ensure that all code, regardless of authorship, is evaluated for security before it enters production. Other cybersecurity guidance, such as secure-by-design principles, can be expanded to include code generation models and other AI systems that impact software supply chain security.

Code generation models also need to be evaluated for security, but it is currently difficult to do so. Evaluation benchmarks for code generation models often focus on the models’ ability to produce functional code but do not assess their ability to generate secure code, which may incentivize a deprioritization of security over functionality during model training. There is inadequate transparency around models’ training data—or understanding of their internal workings—to explore questions such as whether better performing models produce more insecure code.

15
16
 
 

Source (Threads)

17
18
 
 

Republicans try to use the Budget Reconciliation bill to stop states from regulating AI entirely for 10 years.

19
 
 

"Like any product of human creativity, AI can be directed toward positive or negative ends," Francis said in January. "When used in ways that respect human dignity and promote the well-being of individuals and communities, it can contribute positively to the human vocation. Yet, as in all areas where humans are called to make decisions, the shadow of evil also looms here. Where human freedom allows for the possibility of choosing what is wrong, the moral evaluation of this technology will need to take into account how it is directed and used."

...

Just as mechanization disrupted traditional labor in the 1890s, artificial intelligence now potentially threatens employment patterns and human dignity in ways that Pope Leo XIV believes demand similar moral leadership from the church.

"In our own day," Leo XIV concluded in his formal address on Saturday, "the Church offers to everyone the treasury of her social teaching in response to another industrial revolution and to developments in the field of artificial intelligence that pose new challenges for the defense of human dignity, justice, and labor."

20
 
 

cross-posted from: https://pawb.social/post/24295950

Source (Bluesky)

21
 
 

Countries are meeting at the United Nations on Monday to revive efforts to regulate the kinds of AI-controlled autonomous weapons increasingly used in modern warfare, as experts warn time is running out to put guardrails on new lethal technology. Autonomous and artificial intelligence-assisted weapons systems are already playing a greater role in conflicts from Ukraine to Gaza. And rising defence spending worldwide promises to provide a further boost for burgeoning AI-assisted military technology.

Progress towards establishing global rules governing their development and use, however, has not kept pace. And internationally binding standards remain virtually non-existent. Since 2014, countries that are part of the Convention on Conventional Weapons (CCW) have been meeting in Geneva to discuss a potential ban fully autonomous systems that operate without meaningful human control and regulate others. U.N. Secretary-General Antonio Guterres has set a 2026 deadline for states to establish clear rules on AI weapon use. But human rights groups warn that consensus among governments is lacking. Alexander Kmentt, head of arms control at Austria's foreign ministry, said that must quickly change.

"Time is really running out to put in some guardrails so that the nightmare scenarios that some of the most noted experts are warning of don't come to pass," he told Reuters. Monday's gathering of the U.N. General Assembly in New York will be the body's first meeting dedicated to autonomous weapons. Though not legally binding, diplomatic officials want the consultations to ramp up pressure on military powers that are resisting regulation due to concerns the rules could dull the technology's battlefield advantages. Campaign groups hope the meeting, which will also address critical issues not covered by the CCW, including ethical and human rights concerns and the use of autonomous weapons by non-state actors, will push states to agree on a legal instrument. They view it as a crucial litmus test on whether countries are able to bridge divisions ahead of the next round of CCW talks in September.

https://archive.is/8dzXb

22
 
 
23
24
 
 

cross-posted from: https://lemmy.ml/post/30013197

Significance

As AI tools become increasingly prevalent in workplaces, understanding the social dynamics of AI adoption is crucial. Through four experiments with over 4,400 participants, we reveal a social penalty for AI use: Individuals who use AI tools face negative judgments about their competence and motivation from others. These judgments manifest as both anticipated and actual social penalties, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one’s professional reputation. Our findings identify a potential barrier to AI adoption and highlight how social perceptions may reduce the acceptance of helpful technologies in the workplace.

Abstract

Despite the rapid proliferation of AI tools, we know little about how people who use them are perceived by others. Drawing on theories of attribution and impression management, we propose that people believe they will be evaluated negatively by others for using AI tools and that this belief is justified. We examine these predictions in four preregistered experiments (N = 4,439) and find that people who use AI at work anticipate and receive negative evaluations regarding their competence and motivation. Further, we find evidence that these social evaluations affect assessments of job candidates. Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs.

25
 
 

"When a model is deployed for purposes such as analysis or research — the types of uses that are critical to international competitiveness — the outputs are unlikely to substitute for expressive works used in training," the office said. "But making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries."...

"Unlike cases where copying computer programs to access their functional elements was necessary to create new, interoperable works, using images or sound recordings to train a model that generates similar expressive outputs does not merely remove a technical barrier to productive competition," the office said. "In such cases, unless the original work itself is being targeted for comment or parody, it is hard to see the use as transformative."

A day after the office released the report, President Donald Trump fired its director, Shira Perlmutter, a spokesperson told Business Insider.

view more: next ›