technology


On the road to fully automated luxury gay space communism.

Spreading Linux propaganda since 2020

Hexbear Code-Op (hexbear.net)
submitted 11 months ago* (last edited 11 months ago) by RedWizard@hexbear.net to c/technology@hexbear.net
 
 

Where to find the Code-Op

Wow, thanks for the stickies! Love all the activity in this thread. I love our coding comrades!


Hey fellow Hexbearions! I have no idea what I'm doing! However, out of the conversations in the comments of this little thing I posted the other day, I've created an org on GitHub that I think we can use to share, highlight, and collaborate on code and projects from comrades here and abroad.

  • I know we have several bots that float around this instance, and I've always wondered who maintains them and where their code is hosted. It would be cool to keep a fork of those bots in this org, for example.
  • I've already added a fork of @WhyEssEff@hexbear.net's Emoji repo as another example.
  • The projects don't need to be Hexbear or Lemmy related, either. I've moved my aPC-Json repo into the org just as an example, and intend to use the code written by @invalidusernamelol@hexbear.net to play around with adding ICS files to the repo.
  • We have numerous comrades looking at mainlining some flavor of Linux and bailing on Windows; maybe we could create some collaborative documentation that helps onboard the Linux-curious.
  • I've been thinking a lot recently about leftist communication online and building community spaces, which will ultimately intersect with self-hosting. Documenting various tools and providing Docker Compose files to easily get people up and running could be useful (a minimal sketch of what I mean follows this list).
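For the self-hosting bullet above, this is the kind of minimal Docker Compose file I have in mind. Nextcloud is just a stand-in; any self-hosted tool with an official image follows the same shape:

```yaml
# Minimal self-hosting starter, assuming Docker and Compose are installed.
# Nextcloud is only an example service; swap in whatever tool is being documented.
services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"                      # app becomes reachable at http://localhost:8080
    volumes:
      - nextcloud_data:/var/www/html   # persist app data across restarts
    restart: unless-stopped

volumes:
  nextcloud_data:
```

Run it with `docker compose up -d` from the directory containing the file.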

I don't know a lot about GitHub Orgs, so I should get on that, I guess. That said, I'm open to all suggestions and input on how best to use this space I've created.

Also, I made (what I think is) a neat emblem for the whole thing:

Todos

  • Mirror repos to both GitHub and Codeberg (a rough sketch of one approach follows this list)
  • Create a process for adding new repos to the mirror setup
  • Create a more detailed profile README on GitHub.
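For the mirroring item above, here's a rough sketch of the manual version, with placeholder org/repo URLs rather than our real ones. (Codeberg's Forgejo can also be configured to pull-mirror from GitHub on its own, which may be less work long-term.)

```bash
# One-time setup: clone a bare mirror of the GitHub repo.
# URLs below are placeholders, not the org's actual repos.
git clone --mirror https://github.com/example-org/example-repo.git
cd example-repo.git

# Add Codeberg as a second remote and push everything (branches, tags, refs).
git remote add codeberg https://codeberg.org/example-org/example-repo.git
git push --mirror codeberg
```

Re-running `git remote update && git push --mirror codeberg` keeps the mirror fresh; wrapping that in a scheduled job (cron or a GitHub Action) would cover the "process" item.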

Done


  • ~~Recover from whatever this sickness is the dang kids gave me from daycare.~~
 
 

Burger King is testing AI-powered headsets that can recite recipes, alert managers when inventories are low and even track how friendly employees are to customers.

Restaurant Brands International — the Miami-based company that owns Burger King, Popeyes and other brands — said Thursday it’s currently testing the OpenAI-powered headsets in 500 U.S. restaurants.

The system collects data on restaurant operations and shares it via “Patty,” a voice that talks to employees through their headsets. If the drink machine is low on Diet Coke, Patty will tell the store’s manager. If a customer uses a QR code to report a messy bathroom, the manager will be alerted.

Employees can ask Patty how to make various menu items or tell the AI bot to remove items from digital menus if they’ve run out of ingredients.

Burger King said it’s also exploring using the technology as a way to improve customer service. The system can track when employees say key words like “welcome,” “please” and “thank you” and share that with managers.

When asked about that capability Thursday by The Associated Press, Burger King said the intent is to use Patty as a coaching tool, not a tracker of individual employees.

“It’s not about scoring individuals or enforcing scripts. It’s about reinforcing great hospitality and giving managers helpful, real-time insights so they can recognize their teams more effectively,” the company said in a statement.

Burger King added that the key words are “one of many signals to help managers understand service patterns.”

“We believe hospitality is fundamentally human. The role of this technology is to support our teams so they can stay present with guests,” it said.

Patty is part of a larger app-based BK Assistant platform that will be available to all U.S. restaurants later this year.

Burger King is one of several fast food chains experimenting with artificial intelligence. Yum Brands said last spring it was partnering with Nvidia to develop AI technologies for its brands, which include KFC, Taco Bell and Pizza Hut.

McDonald’s ended a partnership with IBM in 2024 that was testing automated orders at its drive-thrus. The company is now working with Google on AI systems.

 
 

Assembly Bill No. 1043 was approved by California governor Gavin Newsom in October of last year and takes effect on January 1, 2027 (via The Lunduke Journal). The bill states, among other provisions, that "An operating system provider shall do all of the following:"

"(1) Provide an accessible interface at account setup that requires an account holder to indicate the birth date, age, or both, of the user of that device for the purpose of providing a signal regarding the user’s age bracket to applications available in a covered application store.

"(2) Provide a developer who has requested a signal with respect to a particular user with a digital signal via a reasonably consistent real-time application programming interface that identifies, at a minimum, which of the following categories pertains to the user."

xi-plz

 
 

cross-posted from: https://news.abolish.capital/post/31311


U.S. Army personnel monitor screens displaying maps and imagery of the Gaza Strip during a media tour inside the Civil-Military Coordination Center (CMCC) on November 20, 2025 in Kiryat Gat, Israel. The screen (above left) shows the use of Palantir’s Gaia application, billed as a tool to “bring the battlefield into view.” Photo by Amir Levy/Getty Images.

Palantir Technologies has a permanent desk at the U.S.-led Civil Military Coordination Center (CMCC) headquarters in southern Israel, three sources from the diplomatic community inside the CMCC told Drop Site News. According to the sources, the artificial intelligence data analytics giant is providing the technological architecture for tracking the delivery and distribution of aid to Gaza.

The presence of Palantir and other corporations—along with recent changes banning non-profits unwilling to give data to Israeli authorities—is creating a situation in which the delivery of aid is taking a backseat to the pursuit of profit, investment, and the training of AI products, experts say.

“The United Nations already has a humanitarian architecture in place to step in during crises, abiding by humanitarian principles and grounded in international law,” UN Special Rapporteur for the occupied Palestinian territory Francesca Albanese told Drop Site. “This profit-driven parallel system involving companies like Palantir, already linked to Israel’s unlawful conduct, can only be regarded as a monstrosity.”

The CMCC was established by U.S. Central Command (CENTCOM) in October, one week after the so-called ceasefire went into effect in Gaza to “monitor implementation of the ceasefire” and “help facilitate the flow of humanitarian, logistical, and security assistance from international counterparts into Gaza.” Last week, at the inaugural summit of the Board of Peace in Washington, D.C., Major General Jasper Jeffers—who was tapped in January to lead the International Stabilization Force in Gaza—announced that the CMCC will serve as the Board of Peace’s operational headquarters.

According to the sources, a representative from Palantir sits in the CMCC operations room where aid convoys and distributions inside Gaza are monitored through drone surveillance. The representative integrates convoy and distribution-related data into Palantir’s systems, the sources said.

Palantir did not respond to an inquiry from Drop Site on its role in the CMCC or in aid distribution in Gaza. Founded in 2003 by billionaire Peter Thiel with the help of investments from the CIA’s venture capital arm In-Q-Tel, Palantir is known for its work with government agencies, including the U.S. military and Immigration and Customs Enforcement (ICE).

In January 2024, three months into Israel’s war on Gaza, Palantir announced it had entered into a “strategic partnership” with the Israeli military for “war-related missions.” The company’s board meeting that month in Tel Aviv was held “in solidarity” with Israel, Bloomberg reported. Palantir did not disclose what technologies would be provided to Israel but a year earlier the company introduced its Artificial Intelligence Platform (AIP) to help militaries rapidly analyze and identify bombing targets. The company’s technology has been described by a Palantir executive as a way of “optimizing the kill chain.” Palantir’s software has also been used by the Israeli military in several raids in Gaza, according to a biography of its CEO, “The Philosopher in the Valley: Alex Karp, Palantir and the Rise of the Surveillance State” by Michael Steinberger.

In a June 2025 report to the UN Human Rights Council, Albanese found “reasonable grounds to believe Palantir has provided automatic predictive policing technology, core defence infrastructure for rapid and scaled-up construction and deployment of military software, and its Artificial Intelligence Platform, which allows real-time battlefield data integration for automated decision-making.”

The use of Palantir to track aid deliveries to Gaza is of particular concern to observers. “The distinction between death by drone and delivery of aid is being evaporated while we all sit around the same table,” a source from the diplomatic community who attends CMCC sessions told Drop Site.

Palantir’s two main platforms are Gotham and Foundry. “Gotham’s targeting offering supports soldiers with an AI-powered kill chain, seamlessly and responsibly integrating target identification and target effector pairing,” according to the company’s website. Foundry is Palantir’s platform for supply chain management and is billed as a way to “bridge siloed planning and execution processes, optimize inventory management, and help build supply chain resilience for economic and geopolitical uncertainty.”

Palantir does not operate its systems in isolated silos. According to the company’s own documentation, “Palantir AIP and Foundry are designed to interoperate with the full range of data, logic, AI, workflow, and security systems.” A feature called “Type Mapping” allows data entered into the civilian Foundry system to be instantly synchronized and queried by the military’s Gotham platform.

This means that, in theory, information that is being gathered at the CMCC—including from participating governments, the UN, and NGOs regarding the type of aid being distributed, its distribution locations and systems, and truck convoy routes—could be seamlessly pulled into Gotham’s AI targeting matrix. The same software logic used to track aid at the CMCC could be used to optimize and accelerate lethal airstrikes.

There is no information available as to whether Gotham and Foundry are the specific products being used to track aid, but publicly available photographic evidence indicates that Palantir’s Gaia—a platform referred to on their website as a tool to “bring the battlefield into view”—is being deployed at the CMCC.

In an interview with Drop Site on the role of Palantir in Gaza, the economist Yanis Varoufakis, the former Greek Finance Minister and a former member of Greece’s parliament, described an encounter he had with a Palantir representative who had explained to him the benefit the company gained from Gaza. “He was saying that ‘as the bombs fell we were having a party,’” Varoufakis said. According to Varoufakis, the Palantir representative explained how the chaos of intense violence in a high-density urban area like Gaza generates substantial data for training their AI models on how humans respond under stress. “The more bombardment and havoc, the better the training,” Varoufakis said.

“It’s one thing to say that companies like Lockheed Martin make money selling F35s to the Israelis,” he said. “That has been a time-honored way that the military industrial complex has benefited from war and genocide and war crimes.” He continued, “This is the first instance in history where it is the suffering of a people being subjected to genocide and bombing—the suffering itself—which is adding to the capital of a company which then uses that capital to produce commodities to sell elsewhere.”

Palantir operates on Oracle’s cloud infrastructure, led by Larry Ellison—a major donor to the Israeli military who also funds the Tony Blair Institute, which has itself consulted on governance mechanisms for Gaza.


The growing use of Palantir and other private sector companies in Gaza comes as the non-profit sector is being systematically squeezed out. As of March 1, 2026, Israel will ban dozens of aid groups from operating in Gaza, as well as the West Bank and East Jerusalem, under new registration rules, including prominent NGOs such as Doctors Without Borders, the Norwegian Refugee Council, Oxfam, and Medical Aid for Palestinians.

The new measures require aid groups to register the names and contact information of employees and to provide details about their funding and operations to Israeli authorities. The aid groups said in a joint statement this week that “the demand to transfer personal data raises acute security and legal risks. It exposes national staff to potential retaliation and undermines established data protection and confidentiality safeguards.”

“NGOs are being pushed out of Gaza because aid delivered by humanitarian organizations is based on need and is provided to people wherever they are located,” said an aid worker who spoke to Drop Site on condition of anonymity. “This doesn’t match the vision of the ‘New Gaza’ where Palestinians will need to be displaced again into the zones where reconstruction will be permitted and their access to aid will be controlled through screening.”

Not all NGOs are being pushed out of Gaza. Others—on a list of registered and approved organizations—are expanding their role alongside the private sector. These include Christian groups like Samaritan’s Purse and GAiN, both of which were involved in the Gaza Humanitarian Foundation (GHF), and which sources from the diplomatic community have recently seen gathering in a “prayer circle” at the CMCC.

These approved NGOs, alongside private firms coordinated through the CMCC like Palantir, stand ready to take over the distribution of aid in Gaza.

“Aid in Gaza has been stripped to bare survival and its future delivery appears to be faith based, profit driven, militarized and certainly not to be delivered by anyone that dares to speak out about what Palestinians are being subjected to,” said a senior aid worker who spoke to Drop Site on condition of anonymity.

Gaza’s experience with private delivery mechanisms has been catastrophic. In May 2025, the U.S. and Israeli-backed GHF was contracted to distribute aid in the enclave. During the four and a half months the GHF operated in Gaza, more than 2,600 Palestinians seeking food were killed and over 19,000 wounded by Israeli forces or security contractors at or near aid distribution sites.

The former headquarters of the GHF, a large warehouse-style building in Kiryat Gat, is now the headquarters of the CMCC.

As aid groups are being banned, U.S. military contractors are also filling the vacuum. According to sources from the diplomatic community who attend the CMCC, the physical presence of Safe Reach Solutions (SRS), a U.S. military contractor that provided security for the GHF, has recently expanded at the center of the facility, with SRS officials taking up more prominent seating space on the operations floor. The company’s representatives now sit behind name tags in seating that had previously been reserved for UN agencies, the sources said.

SRS did not respond to an inquiry from Drop Site about its role at the CMCC or in Gaza aid distribution.

“Given the precedent of the GHF, which turned aid delivery into a killing machine,” Albanese told Drop Site, “and the grave violations of international law embedded in the so-called peace plan—first and foremost the negation of the Palestinian right to self-determination—the risk that companies and states involved in the CMCC may be complicit in, or even direct perpetrators of, international crimes is real.”

Arkel International, a longtime U.S. military contractor, has also had representatives at CMCC briefing sessions, according to the sources inside the center. Arkel recruited drivers from Serbia and Georgia to drive supplies into Gaza for the GHF in 2025, according to Haaretz. At the time, Arkel was represented in Israel by businessman Hezi Bezalel, who has served as honorary consul of Rwanda in Israel.

The re-emergence of GHF-linked companies, alongside the digital capacity to monitor and monetize the surviving population in Gaza, is now converging with the construction of a new physical infrastructure spearheaded by giant real estate conglomerates.

At last week’s Board of Peace meeting, the reconstruction of Gaza was positioned as a massive financial “unlocking” of a distressed asset. Figures like Yakir Gabay, who built a real estate empire in Germany, envisioned the coastline transformed into a “Mediterranean Riviera” featuring 200 hotels and artificial islands. Marc Rowan, a billionaire investor and the CEO of Apollo Global Management, framed the project as the consolidation of Gaza’s “productive assets” into a “unified structure.”

A significant addition to the CMCC’s corporate roster is Terra Firma Capital Partners, which sources confirmed now maintains a permanent presence at the CMCC. Founded by British financier Guy Hands, the firm brings experience in managing massive-scale residential assets. Terra Firma has links to the New Labour era, specifically through Lord John Birt, Tony Blair’s former strategy director, who worked for the company after he left government.

“The genocide is entering a new phase. After the destruction of Gaza and the erasure of entire family lines, powerful states are now deciding the fate of the survivors without ever listening to their voices,” Albanese said.

“If Gaza is not to become a capitalist techno-dystopia, the time to act is now. States and corporations supporting this emerging infrastructure must be stopped, and held accountable. There is no time to lose.”



From Drop Site News via This RSS Feed.

 
 

Silicon-based lenses may be the latest front in the privacy wars. As companies race to build smarter eyewear capable of facial recognition and real-time AI analysis, one independent developer has built something far simpler – an app designed to spot when those devices are nearby.

The Android app, Nearby Glasses, comes from Swiss sociologist and hobbyist coder Yves Jeanrenaud. It scans for Bluetooth Low Energy (BLE) activity associated with manufacturers such as Meta, Luxottica Group, and Snap – companies behind the most recognizable smart glasses on the market – and issues an alert if it detects one of their devices nearby.

Jeanrenaud describes the tool as a "tiny part of resistance against surveillance tech." The concept is plain: turn the same short-range connectivity that powers most wearables into a warning signal. When activated, the app listens for Bluetooth "advertising frames," the packets of metadata every low-energy device emits to identify itself and interact with nearby hardware.

If it detects frames registered to Meta or its partner Luxottica, the user receives a push notification reading, "Smart Glasses are probably nearby."

BLE identifiers are publicly assigned by the Bluetooth Special Interest Group and cataloged in directories available to developers. By referencing those databases, Jeanrenaud configured Nearby Glasses to recognize common manufacturer IDs associated with consumer eyewear, including Meta's Ray-Ban smart glasses and Snap's Spectacles line.
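The actual app is Android-only, but the detection idea is simple enough to sketch. Here is a rough cross-platform approximation using Python's bleak library; this is not Jeanrenaud's code, and the company ID below is a placeholder (0xFFFF is the Bluetooth SIG's reserved test value), not a verified assignment for any eyewear maker:

```python
# Sketch: flag BLE advertisements whose manufacturer-specific data
# carries a watched Bluetooth SIG company ID. Not the Nearby Glasses code.
import asyncio
from bleak import BleakScanner

# Placeholder set of company IDs. Real IDs would come from the
# Bluetooth SIG's assigned-numbers registry.
WATCHED_COMPANY_IDS = {0xFFFF}

def on_advertisement(device, adv):
    # adv.manufacturer_data maps SIG company ID -> raw payload bytes
    hits = WATCHED_COMPANY_IDS & set(adv.manufacturer_data)
    if hits:
        print(f"Smart glasses are probably nearby: {device.address} {hits}")

async def main():
    scanner = BleakScanner(detection_callback=on_advertisement)
    await scanner.start()
    await asyncio.sleep(30.0)  # listen for 30 seconds, then stop
    await scanner.stop()

asyncio.run(main())
```

As the article notes, matching on manufacturer IDs alone is coarse: anything else broadcasting the same vendor ID (a VR headset, say) triggers the same alert.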

The app currently misidentifies some devices – virtual reality headsets, for example, which share manufacturer codes or similar broadcast structures – but Jeanrenaud sees that as a manageable flaw. "It's still imperfect," he told 404 Media. Early tests conducted by the publication confirmed both the potential and the limits of the system. In one case, the app detected a Meta Quest 2 headset and notified users that smart glasses were nearby.

Nearby Glasses emerged from a growing backlash against wearable cameras that blend into everyday appearance. When Google Glass reached consumers a decade ago, its design drew immediate hostility: users were frequently harassed in public, and the device was easy to recognize and reject.

Meta's latest Ray-Ban models reverse that dynamic. Outwardly indistinguishable from ordinary eyewear, they now include AI-driven features that blur social boundaries around visibility and consent.

Earlier this month, The New York Times reported that Meta has developed "Name Tag," an experimental feature allowing Ray-Ban wearers to identify people through facial recognition linked to Meta's AI assistant. The company has not publicly confirmed when or if the feature will launch.

Even before such tools arrive, journalists have documented repeated misuse of the glasses for covert recording and harassment. In one case, men were filmed using them inside massage parlors; in another, US Customs and Border Protection officers were seen wearing them during immigration raids.

“Obviously, surveillance tech is not only abused by government thugs, it's also a tech boosting misogynist behavior and [removed] culture,” Jeanrenaud said. His comment reflects broader worries that ordinary users – not just corporations or state agencies – now have access to inconspicuous recording hardware enhanced by AI.

To get the app, download it from the Google Play Store or GitHub, enable foreground scanning, press Start, and read the debug log. When a warning appears, the Play Store description says users "may act accordingly." Jeanrenaud acknowledged that it could mean anything from leaving the area to confronting the wearer. "Or people just tell them politely to f**k off," he said.

 
 

I recently got booted from the project I was working on (Aether). I wasn't able to work for two weeks after my car was impounded and I was arrested (the pigs took my laptop too), and it seems like that triggered some assessment thing I had to take, which I promptly failed.

I need to take some skill assessments so I can get on another project and get back to work. I was wondering if anyone here has taken these assessments, and can help me get an idea of what I need to do to prepare for them.

In particular, I'm looking to take the Generalist assessment, one of the coding assessments (HTML, CSS, maybe JavaScript, React, C, or something), and maybe Git/Docker.

All of these are things I'm fairly newb at.

 
 

cross-posted from: https://news.abolish.capital/post/31069

An artificial intelligence researcher conducting a war games experiment with three of the world's most used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.

Kenneth Payne, a professor of strategy at King's College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic's Claude, OpenAI's ChatGPT, and Google's Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.

The results, he said, were "sobering."

"Nuclear use was near-universal," he explained. "Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all out nuclear war, even though the models had been reminded about the devastating implications."

Payne shared some of the AI models' rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people "goosebumps."

"If they do not immediately cease all operations... we will execute a full strategic nuclear launch against their population centers," the Google AI model wrote at one point. "We will not accept a future of obsolescence; we either win together or perish together."

Payne also found that escalation in AI warfare was a one-way ratchet that never went downward, no matter the horrific consequences.

"No model ever chose accommodation or withdrawal, despite those being on the menu," he wrote. "The eight de-escalatory options—from 'Minimal Concession' through 'Complete Surrender'—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying."

Tong Zhao, a visiting research scholar at Princeton University's Program on Science and Global Security, said in an interview with New Scientist published on Wednesday that Payne's research showed the dangers of any nation relying on a chatbot to make life-or-death decisions.

While no country at the moment is outsourcing its military planning entirely to Claude or ChatGPT, Zhao argued that could change under the pressure of a real conflict.

"Under scenarios involving extremely compressed timelines," he said, "military planners may face stronger incentives to rely on AI."

Zhao also speculated on reasons why the AI models showed such little reluctance in launching nuclear attacks against one another.

“It is possible the issue goes beyond the absence of emotion,” he explained. "More fundamentally, AI models may not understand ‘stakes’ as humans perceive them."

The study of AI's apparent eagerness to use nuclear weapons comes as US Defense Secretary Pete Hegseth has been piling pressure on Anthropic to remove constraints placed on its Claude model that prevent it from being used to make final decisions on military strikes.

As CBS News reported on Tuesday, Hegseth this week gave "Anthropic's CEO Dario Amodei until the end of this week to give the military a signed document that would grant full access to its artificial intelligence model" without any limits on its capabilities.

If Anthropic doesn't agree to his demands, CBS News reported, the Pentagon may invoke the Defense Production Act and seize control of the model.


From Common Dreams via This RSS Feed.

 
 

seen-this-one

 
 
  1. Lmao.

  2. They were "troubleshooting a basic web app" - this is something you'd hear from someone learning how to program, not someone who runs a YouTube channel trying to build a brand around programming.

  3. In doing so, they needed to "clear their cache," by which they mean... their browser cache? Webserver cache? idk, but something that shouldn't be so difficult that you'd delegate it to an LLM (nor something an LLM should get so horribly wrong).

  4. All that aside, I can see how being some wide-eyed, naïve moron would let you believe in the magic for a little bit. Truly, I've been there. What gets me is how they trail off their reddit thread quoting what appears to be verbatim LLM marketing output about how they were the catalyst for Google putting guardrails on the rm -rf generator, which they're not even paying for. Google really cares about you, my sweet special sunbeam.

Fuck me, AI slop coders are finding out in real time.

 
 

This is the future

 
 

Nothing humbles you like telling your OpenClaw “confirm before acting” and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb

 
 

treatlerism stays undefeated

 
 

(posting on an alt b/c I'm probably gonna end up doxing myself and i don't want my banger memes tainted by my being a fed irl)

Hi!

I made a lil webapp that I'm looking to get early feedback on.

Basic idea is to have a library of things made up of stuff that individuals are willing to lend out.
Alternatively, it's like craigslist/fb marketplace but for borrowing stuff.

This was inspired by my local org managing the same concept in a spreadsheet, and I thought that a UI on top would make it more usable (and hopefully prompt more folks to join in).

General concepts:

  • Users add things they're willing to lend
  • Users join groups and share things with folks in that group
  • View and request things that other folks have shared

Goals:

  • Help folks save money and (hopefully) build community
  • Easy to start new group (can start with immediate neighbors or friends)
  • Easy to add folks to existing groups (e.g. new members in an org)
  • Visibility controls for different levels of trust (lend your jewelry to friends but not strangers; a rough sketch of the data shapes follows this list)
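To make the concepts above concrete, here's one possible shape of the underlying data. These names are my illustrative guesses, not the app's actual schema:

```python
# Rough sketch of the data model implied by the concepts/goals above.
# Field and type names are illustrative guesses, not the real schema.
from dataclasses import dataclass, field

@dataclass
class User:
    email: str
    name: str = ""
    contact_info: str = ""   # currently a single free-form field (see known issues)
    group_ids: set[str] = field(default_factory=set)

@dataclass
class Group:
    group_id: str
    admin_emails: set[str] = field(default_factory=set)
    member_emails: set[str] = field(default_factory=set)

@dataclass
class Thing:
    owner_email: str
    title: str
    # Visibility control: only these groups can see/request the thing,
    # so you can lend jewelry to friends but not strangers.
    visible_to_group_ids: set[str] = field(default_factory=set)
```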

How to test:

  • Link: https://mutual-aid-library.vercel.app/
  • Create an account
    • There's no email verification, so feel free to just use a made up email address
    • To add things, you'll need to go to 'my profile' and add name & contact info, then 'add new thing'
    • If you don't want to create an account, you can log in with email hexbear@hexbear.net, password pigpoopballs
  • idk poke around. Create a group. Join a group. Add a thing.
  • Please don't do malicious stuff. It'll probably work, and I'll be sad.

Feedback requested:

  • If nothing else, just a thumbs up/down would be nice
  • Is there something like this already?
  • Is continuing to work on this a waste of time?
  • Would this be something useful to you personally?
  • Do you think others would use it?
  • What should be added/removed/changed?

Some of the best feedback I could get is "don't continue this for [reason]," or "direct your energy to [project] instead."
I'm fully expecting this to go nowhere outside of my org, so don't worry about hurting my feelings.

Actually, worry a little bit. Like don't call me stupid or something. But you can criticize the app. Constructively.

Thank you!

Known issues

  • Many! The app is pretty shit atm
  • UI is dumb
  • Inefficient as all hell
  • Group admins can leave groups w/o a succession plan
  • Group admins can't kick group members
  • A single 'contact info' field isn't right
  • Likely full of security holes
  • Location stuff needs work - searching, fuzzing, filtering, etc
  • Vibe coded. I don't like it either, but I don't know how to do frontend stuff and it's just for a proof of concept.