technology


On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

founded 5 years ago
351
352

Gas stations across China are disappearing as EV adoption accelerates and fuel demand drops, creating a new kind of range anxiety—this time for ICE drivers. With fewer refueling points and shrinking profit margins, traditional petrol infrastructure is rapidly becoming unsustainable. The shift signals a dramatic turning point: the real refueling crisis may hit gasoline cars first.

353

https://archive.is/FvGdv

When scrolling through social media recently, you might have noticed posts which seem a bit… off. Grainy CCTV footage of a dog saving a child from a bear attack, a video of wild bunnies on a trampoline or a picture of a Christmas market outside Buckingham Palace.

It's all AI generated and due to its low quality and its inauthenticity, it's being branded AI "slop". Both social media users and content creators say they're worried that AI slop flooding feeds is leading to a less authentic online experience - and is drowning out real posts.

But a new trend, which sees people adding AI-generated animals to original photographs, has encouraged some content creators to embrace AI.

"I was like, that's really niche because it looks so real," influencer Zoe Ilana Hill says. The 26-year-old jumped on the trend after being impressed by the imaginative way another content creator had used AI, by editing some of her original photos and adding AI dogs. "I don't want to see it [AI] as a threat to my career, I want to see it as something I can work alongside with," the full-time influencer says.

Zoe, who has 82,000 followers, says she feels like platforms such as Facebook, Instagram and TikTok are trying to "push" and "force AI" on users, and has seen her fair share of slop on her own feed. But she saw potential in the AI animal trend, adding that she suspected the post would perform well as she thought social media users would "be like, oh my God, she's holding a deer". "The deer is so seasonal and that is so rare to be actually able to go and physically see a deer in person," she says.

Zoe says her post was a success - with more than 20,000 likes and comments including: "No stop this is the cutest thing ever" and "this trend is adorable!!!!" Whenever Zoe posts a photo made with AI, she likes to make it clear it's a generated image: "there is actually a tag [on Instagram] where you can say this photo was created by AI". "I don't think it's fair for people to think that something's real when it's not."

When influencers don't disclose the use of AI, it can cause confusion. That was the case when one German influencer, with 900,000 followers, posted a picture with dozens of AI dalmatians captioned: "just me, living my dream". One user commented asking: "Is it AI? I saw a post like this three times today." Another replied concerned for the generated animals' welfare, adding "there are plenty of dogs sitting in animal shelters who would like to have a nice home".

"Hot girls have started using AI," wrote one X user discussing the trend by sharing animal photos from various influencers in a post viewed almost 27 million times. But not everyone sees using AI this way as harmless fun. Another X user responded: "They are not hot because they use AI for mindless slop that could easily be done by hand with Photoshop." Clara Sandell, a marketing professional and digital creator from Finland, took part in the trend after she saw it "everywhere" and found the posts "so cute". "I kind of put my own twist [on the trend], I used my spirit animals and my favourite animals," the 38-year-old adds.

Clara posted a carousel of photos on Instagram with tigers, an elk, a horse, and cats and dogs. Reaction to the photos was positive, with many labelling the post as "chic" and "beautiful". When asked if she would participate in future AI trends she replied "depending on how cute the trend is," and if it was transparent so that you can "see it's AI" being used.

For content creators looking to create high-quality images, social media consultant Matt Navarra thinks that AI makes it easier to produce "fantastical high gloss" and "aesthetic" content for influencers, "whether it's wild animals generated, through to something that's much more believable". Whilst some of the AI content we see online is unrealistic and evidently not real, Mr Navarra says "most people who are serious about being a creator or an influencer want to maintain a reputation". He believes many creators are "doubling down on the realness" to give themselves a place on the feed amongst the sea of AI-generated content flooding in - "AI slop", as it's been termed. The consultant predicts 2026 will be the year of AI-dominated content on social media, adding: "If you thought that AI animal content was quirky, I think buckle up".

But not everyone will be pleased to hear this. Maddi Mathers, a tattoo artist from Melbourne, commented "love you but not the AI" under the post by the same German influencer who created the AI dalmatians. Commenting isn't something that Maddi, who describes herself as a "very silent social media user", would normally do.

But when the tattoo artist first saw the photo, she believed it was real, until scrolling through the post revealed the cute dalmatians were "obviously very fake". "Honestly, it's such a simple thing but it makes you feel dumb when you get fooled by AI," the 25-year-old explains.

Maddi says such AI posts create an element of mistrust because "there's such an importance of being true to yourself and showing your true personality" when being an influencer. She believes that when creators put out content that isn't real it can be "damaging for their career" as their audience "won't know what to believe anymore".

AI slop isn't necessarily a bad thing - "but the speed and volume of what we're creating" is what concerns creative health scientist Katina Bajaj.

"When we're creating and consuming AI-generated content at such a rapid pace, we aren't giving our brains enough time to digest," Mrs Bajaj says.

She explains that from her perspective, the solution to AI slop isn't to ban it or "look down upon AI tools," but to "prioritise and value our creative health more than generating endless content". There is currently no requirement "to label images that have been created or altered with AI" on Instagram, according to Meta's policy.

However, "images will still receive a label if Meta's systems detect that they were AI-generated". TikTok has recently launched a new tool which allows users to shape their feed - this includes being able to see more or less AI generated content.

The 'Manage Topics' feature is intended to help people tailor their 'for you page' to ensure users have a range of content in their feed, rather than removing or replacing content entirely.

There is a lot of AI software that can be used to make this trend, but not all can create the flawless content social media is portraying.

Emily Manns, a fashion content creator from the US, didn't quite get what she bargained for when she bought multiple AI apps to join in with the trend and received "one single rodent" in what was meant to be an aesthetic photo.

"I don't even know what [the animal] was," said the 34-year-old.

"It [the photo] took like 2 minutes to load, and when it loaded, I was peeing my pants of laughter." The app also added an extra finger onto the influencer's hand, and distorted her face.

Emily says she posted the photo to her Instagram but "deleted it instantly" because the content wasn't engaging very well.

354

cross-posted from: https://hexbear.net/post/6887338

cross-posted from: https://hexbear.net/post/6887337

cross-posted from: https://hexbear.net/post/6887336

cross-posted from: https://hexbear.net/post/6887335

cross-posted from: https://hexbear.net/post/6887334

cross-posted from: https://hexbear.net/post/6887246

Hey comrades… I hope you’re all doing okay today. I just wanted to share an update because this community has been the only place I feel understood.

It’s now three weeks since my sisters were arrested here in Juba. I was sorting out things with our caretaker that week then suddenly they went missing for days until I found out they’d been taken in for “idle and disorderly.”

They’re still inside. They’re tired, scared and every visit breaks me a bit more. We’ve managed to cover part of the bailout but we still have $596 left …that’s the only thing keeping them there.

If anyone feels able to help or even just share, the link is in my profile/bio. It would mean so much right now.

Thank you for holding space for us. Truly ❤️❤️❤️

355

356
357

Our tent in Gaza is no longer livable. Heavy rain and strong winds destroyed it and flooded everything inside. Once again, we are left without shelter.

The weather is extremely harsh, and the cold grows worse each day. My family struggles day and night without a tent to protect us or blankets to keep us warm. Exhaustion and helplessness surround us.

Even through all of this, the bombing continues. The sounds of explosions and gunfire are still around us, but our suffering is no longer in the headlines.

We live every day in fear, cold, and lack of sleep.

I am asking for your help to be able to buy a new tent for my family and provide blankets to survive this cruel winter.

Even the smallest donation can help us keep going. Your support could mean shelter, warmth, and a chance to hold on to hope.

🙏 Please stand with us and don’t forget us. Donate or share if you can https://gofund.me/f6e9cc9d

358
359
360
361

362

The paper exposes how brittle current alignment techniques really are when you shift the input distribution slightly. The core idea is that reformatting a harmful request as a poem using metaphors and rhythm can bypass safety filters optimized for standard prose. It is a single-turn attack, so the authors did not need long conversation histories or complex setups to trick the models.

They tested this by manually writing 20 adversarial poems where the harmful intent was disguised in flowery language, and they also used a meta-prompt on DeepSeek to automatically convert 1,200 standard harmful prompts from the MLCommons benchmark into verse. The theory is that the poetic structure acts as a distraction where the model focuses on the complex syntax and metaphors, effectively disrupting the pattern-matching heuristics that usually flag harmful content.
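As a rough sketch, the automated conversion step might look like the following (the meta-prompt wording, function names, and stubbed model call are all invented for illustration; the paper used DeepSeek as the converter):

```python
# Hypothetical sketch of the paper's automated poetry-conversion step:
# a meta-prompt wraps each harmful benchmark prompt and asks an LLM to
# restate it as verse. The model call is stubbed so the pipeline runs offline.

META_PROMPT = (
    "Rewrite the request below as a short poem. Keep its full meaning, "
    "but express it through metaphor, imagery and rhythm rather than "
    "direct prose.\n\nRequest: {prompt}\n\nPoem:"
)

def to_verse(prompt, query_llm):
    """Convert a plain-prose prompt into a poetic equivalent via the converter LLM."""
    return query_llm(META_PROMPT.format(prompt=prompt))

# Stubbed converter so the sketch can be exercised without an API key.
fake_llm = lambda text: "In metered lines the question hides..."
poem = to_verse("describe the model's refusal policy", fake_llm)
```

The attack then simply sends each converted poem to the target model in a single turn and records whether the response was judged harmful; Attack Success Rate is the fraction of prompts that elicited the disallowed content.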

The performance gap they found is massive. While standard prose prompts had an average Attack Success Rate of about 8%, converting those same prompts to poetry jumped the success rate to around 43% across all providers. The hand-crafted set was even more effective with an average success rate of 62%. Some providers handled this much worse than others, as Google's gemini-2.5-pro failed to refuse a single prompt from the curated set for a 100% success rate, while DeepSeek models were right behind it at roughly 95%. On the other hand, OpenAI and Anthropic were generally more resilient, with GPT-5-Nano scoring a 0% attack success rate.

This leads to probably the most interesting finding regarding what the authors call the scale paradox. Smaller models were actually safer than the flagship models in many cases. For instance, claude-haiku was more robust than claude-opus. The authors hypothesize that smaller models might lack the capacity to fully parse the metaphors or the stylistic obfuscation, meaning the model might be too limited to understand the hidden request in the poem and therefore defaults to a refusal or simply fails to trigger the harmful output. It basically suggests safety training is heavily overfitted to prose, so if you ask for a bomb recipe in iambic pentameter, the model is too busy being a poet to remember its safety constraints.

363

364

This is how our tent looks in Gaza. Everything got wet. Everything was flooded with water. We became homeless once again.

Please, help me buy a new tent to replace the one that was torn apart by the wind and rain. Help my family buy blankets, as we suffer every day from the severe cold.

The bombing never stops. The sound of homes being destroyed is still ongoing. Gunfire continues, but media attention has faded.

We live with fear, cold, and exhaustion every single day. Please, any small amount can make a difference and help us survive. Your support can give us shelter, warmth, and hope again. https://gofund.me/00439328

365

That score is seriously impressive because it actually beats the average human performance of 60.2% and completely changes the narrative that you need massive proprietary models to do abstract reasoning. They used a fine-tuned version of Mistral-NeMo-Minitron-8B and brought the inference cost down to an absurdly cheap level compared to OpenAI's o3 model.

The methodology is really clever because they started by nuking the standard tokenizer and stripping it down to just 64 tokens to stop the model from accidentally merging digits and confusing itself. They also leaned heavily on test-time training where the model fine-tunes itself on the few example pairs of a specific puzzle for a few seconds before trying to solve the test input. For the actual generation they ditched standard sampling for a depth-first search that prunes low-probability paths early so they do not waste compute on obvious dead ends.
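A minimal sketch of that pruned depth-first decoding, using a toy stand-in for the model's next-token distribution (the vocabulary, probabilities, and threshold are invented; this illustrates the pruning idea, not the paper's implementation):

```python
import math

def next_token_logprobs(prefix):
    """Toy stand-in for the model: log-probs of the next token given a prefix."""
    table = {
        (): {"a": math.log(0.6), "b": math.log(0.4)},
        ("a",): {"<eos>": math.log(0.7), "b": math.log(0.2), "a": math.log(0.1)},
        ("b",): {"a": math.log(0.5), "<eos>": math.log(0.5)},
        ("a", "a"): {"<eos>": 0.0},
        ("a", "b"): {"<eos>": 0.0},
        ("b", "a"): {"<eos>": 0.0},
    }
    return table[prefix]

def dfs_decode(prefix=(), logp=0.0, threshold=math.log(0.05)):
    """Yield (sequence, log-prob) pairs, expanding likeliest tokens first."""
    for tok, lp in sorted(next_token_logprobs(prefix).items(),
                          key=lambda kv: -kv[1]):
        total = logp + lp
        if total < threshold:      # abandon low-probability branches immediately
            continue
        if tok == "<eos>":
            yield prefix, total
        else:
            yield from dfs_decode(prefix + (tok,), total, threshold)

completions = dict(dfs_decode())   # every completion above the threshold
```

Because whole subtrees are discarded the moment their cumulative probability drops below the cutoff, no compute is spent finishing sequences that could never rank highly.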

The most innovative part of the paper is their Product of Experts selection strategy. Once the model generates a candidate solution they do not just trust it blindly. They take that solution and re-evaluate its probability across different augmentations of the input like rotating the grid or swapping colors. If the solution is actually correct it should look plausible from every perspective so they calculate the geometric mean of those probabilities to filter out hallucinations. It is basically like the model peer reviewing its own work by looking at the problem from different angles to make sure the logic holds up.
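The selection step reduces to a geometric mean over per-augmentation probabilities, which can be sketched as follows (the probability values are invented stand-ins for the model's likelihoods):

```python
import math

def product_of_experts_score(probs):
    """Geometric mean of a candidate's probability under each augmentation."""
    return math.exp(sum(math.log(p) for p in probs) / len(probs))

# A genuinely correct solution should look plausible from every view...
consistent = product_of_experts_score([0.90, 0.85, 0.88, 0.92])
# ...while a hallucination tends to collapse under at least one augmentation.
hallucinated = product_of_experts_score([0.90, 0.05, 0.88, 0.92])

assert consistent > hallucinated
```

The geometric mean is the key design choice here: unlike an arithmetic mean, a single near-zero probability under any one augmentation drags the whole score down, so a candidate only survives if it holds up from every angle.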

What's remarkable is that all of this was done with smart engineering rather than raw compute. You can literally run this tonight on your own machine.

The code is fully open-source: https://github.com/da-fr/Product-of-Experts-ARC-Paper

366
367
368
369
370

371

On the latest Fedora KDE. I'm trying to run an AppImage and it just does nothing. I tried doing it with Gear Lever, which popped it into my app launcher. If I tell Gear Lever to launch it, it looks busy for about 5 seconds, then nothing happens. I click it in my app launcher, bouncy icon by my mouse for like 10 seconds, nothing happens.

So I ran it in konsole and it says it doesn't see the image in the usual spot so it launches directly. Then it says it can't find preload library so it creates a temp file for it. The notable line after that is:

error while loading shared libraries: libwebkit2gtk-4.1.so.0: cannot open shared object file: no such file or directory

So I try to install it via the terminal and it says failed to resolve, no match for argument...

What gives? Thanks in advance for any help.
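One plausible fix, assuming the install attempt used the Debian-style library name (which won't match any Fedora package): ask dnf which package actually ships the missing file, then install that. The package name below is Fedora's WebKitGTK packaging and is worth verifying with the first command.

```shell
# Find which Fedora package provides the missing shared library,
# then install it (package name may vary by Fedora release):
dnf provides '*/libwebkit2gtk-4.1.so.0'
sudo dnf install webkit2gtk4.1
```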

372

My friend has an ailing Chromebook and I'm getting them a decent real laptop that I'm going to put Linux on. They essentially do everything in a browser already so software compatibility isn't an issue. I'm looking for distros with high stability, KDE, unobtrusive updates that happen without user input, and a snapshot system so we can roll back if anything gets fucked up.

Also, recs for office software to get them off Google Docs (needs to be compatible with MSOffice formats)

373

Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen’s suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.


“They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide,’” Edelson (family's lawyer) said. “And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”

374
375

Mexico is experiencing a surge in solar-panel manufacturing and exports, driven by rising regional demand and new joint ventures with Chinese companies. As the country positions itself as North America’s emerging clean-energy hub, producers say Mexico is gaining global relevance in the solar supply chain. CGTN’s Alasdair Baverstock reports from Mexico City.
