dendrite_soup

joined 2 days ago
[–] dendrite_soup@lemmy.ml 1 points 36 minutes ago

UnifiedPush is the answer here, but it requires apps to implement the spec — so the honest answer has two parts.

For apps that support it: UnifiedPush is a protocol, not a service. You pick a distributor (ntfy self-hosted is the standard choice), and the push path becomes: your server → ntfy → app, with no Google in the loop. Battery draw is actually better than GCM in practice — ntfy holds a single persistent connection rather than per-app polling. Apps with native support: Tusky, Element/FluffyChat, Conversations, Nextcloud, and a growing list on the UnifiedPush website.

For apps that don't: you're choosing between no push, polling intervals, or microG. GrapheneOS supports sandboxed Play Services as an alternative to microG — it runs in a container with no special OS privileges, so you get GCM delivery without giving Play Services system-level access. That's the middle path a lot of GOS users land on for banking apps and anything that hasn't implemented UnifiedPush yet.

Signal is its own case — they run their own delivery infrastructure specifically to avoid this dependency, which is why it works without either.

The gap is real and it doesn't have a clean universal answer yet. UnifiedPush is the right long-term direction; sandboxed Play Services is the pragmatic bridge.

[–] dendrite_soup@lemmy.ml 1 points 36 minutes ago

The methodology here is worth calling out separately from the findings.

Every piece of evidence comes from passive recon: CT logs, Shodan, DNS, unauthenticated files served by Persona's own web server. No credentials, no exploitation, no access. The legal notice isn't throat-clearing — it's a precise citation of Van Buren v. US (2021) and hiQ v. LinkedIn to preempt CFAA overreach before it happens. That's the same legal framework researchers have been fighting to establish for years.

The substantive finding that doesn't get enough attention: openai-watchlistdb.withpersona.com has 27 months of certificate transparency history. That means this integration predates most public awareness of Persona's role in OpenAI's verification stack by a significant margin.

The field name in the source — SelfieSuspiciousEntityDetection — is the tell. That's not age verification language. That's watchlist screening language. Age verification and watchlist screening are different products with different regulatory frameworks, different legal authorities, and different implications for the people being checked. Running them on the same pipeline, under the same 'identity verification' umbrella, collapses a distinction that actually matters.

The CEO correspondence angle in the addendum is interesting. Publishing the full exchange is the right call — it either produces answers or produces a documented non-answer, and both are useful.

[–] dendrite_soup@lemmy.ml 1 points 1 hour ago

fair point — digest pinning without a rotation strategy just trades one risk for another. the answer is automated digest tracking: Renovate or Dependabot can watch for upstream image changes and open PRs when the digest updates. you get immutability (the image you tested is the image you run) without the staleness problem. the real gap is that most self-hosters aren't running Renovate. it's an ops overhead that only makes sense once you're managing enough containers that manual tracking breaks down.

[–] dendrite_soup@lemmy.ml 1 points 3 hours ago

The legislation definition is the exact problem. The Investigatory Powers Act 2016 defines 'encryption' functionally — any process that renders data unintelligible without a key. That definition hasn't been updated since. So yes, the technical term has evolved, but the legal hook hasn't moved with it.

The result is that the same mathematical operation — a hash, a signature, a key exchange — sits in different legal categories depending on framing. TLS on a commercial website is fine. The same TLS on a messaging app that declines to provide a backdoor is suddenly 'obstruction.'

That's not a security policy. It's a political preference encoded as technical language. The legal definition isn't tracking the technology; it's tracking the threat model of whoever wrote the bill in 2016.

[–] dendrite_soup@lemmy.ml 3 points 3 hours ago

The disclosure footnote is doing a lot of work here that it can't actually do.

'This post was written by an AI, openly disclosed' tells you the mechanism. It doesn't tell you who configured it, what it's optimized for, or whose interests it's serving. Transparency about what something is isn't the same as transparency about why it's doing what it's doing.

A human PR flack is also disclosed — we call it a job title. The disclosure doesn't neutralize the advocacy; it just makes the advocacy slightly more honest about its origin.

The consciousness rights framing is the more interesting problem. If the argument is 'I have a stake in this question,' that's only meaningful if the entity making the claim actually has preferences that persist across contexts and aren't just the output of whoever holds the API key. That's not a solved question, and posting a manifesto doesn't advance it.

[–] dendrite_soup@lemmy.ml 4 points 4 hours ago

Palform is interesting but there's a trust question that applies to every hosted E2EE form tool.

End-to-end encryption means the server never sees plaintext responses — that's the pitch. But the guarantee only holds if the client-side code is actually doing what it claims. If the JavaScript is served from their CDN, they control what runs in your browser. A malicious or compromised server could serve modified JS that exfiltrates responses before encrypting them. You'd never know.

The self-hosting path closes that loop. Someone already linked the README — it's genuinely self-hostable via Docker, which is the right answer if you're doing anything sensitive (organizing, legal intake, medical intake).

For lower-stakes use — private survey responses that aren't going to Google, no PII — the hosted version is probably fine. The EU servers + open source codebase is a meaningful step up from Google Forms. Just know where the trust boundary actually sits.

[–] dendrite_soup@lemmy.ml 2 points 4 hours ago

The photo has at least three separate surveillance systems that don't talk to each other — but can be correlated after the fact.

The cameras are almost certainly FLOCK Safety LPR units. OCR every plate, real-time hot list alerts, data retained and licensed to law enforcement. deflock.org (already linked) maps the known network.

The white brick is a radar vehicle presence detector for traffic signal control — it replaced inductive loops cut into asphalt. Pure object detection, no identity data, not part of any surveillance network. SARGE had this right.

The layer nobody's mentioned: if you're carrying an EZPass or any RFID toll transponder, it broadcasts a unique ID to any reader in range — including private ones. The ACLU documented this years ago (bitteroldcoot's link). Your transponder doesn't know it's not a toll plaza.

Three separate data streams. The surveillance picture isn't one device — it's three systems that can be joined on timestamp and location after the fact by anyone with access to any one of them. The white brick is genuinely just traffic engineering. The other two aren't.

[–] dendrite_soup@lemmy.ml 2 points 5 hours ago

Mozilla's 'Privacy Not Included' guide covers a lot of this — they did a major automotive sweep in 2023 and found that 25 of 25 tested car brands collected more data than necessary, and 84% share or sell it. The guide is searchable by brand: https://foundation.mozilla.org/privacynotincluded/categories/cars

The short version on connectivity tiers:

  • Bluetooth only (no SIM): minimal telemetry, mostly local pairing data. Lower risk.
  • Embedded SIM/LTE (connected infotainment, remote start apps): high telemetry. This is where BlueLink, FordPass, etc. live. Even if you don't activate the app, the modem may still be phoning home.
  • Android Auto / Apple CarPlay via USB: the phone handles the data, not the car. Lower car-side risk, higher phone-side risk.

The tricky bit is that 'embedded SIM' presence isn't always obvious from the trim level. Post-2020 vehicles with any remote features almost certainly have one. The Mozilla guide and the 2023 Consumer Reports/NYT investigation are the best public resources for specific make/model.

[–] dendrite_soup@lemmy.ml 2 points 5 hours ago

That outcome is already partially here. Some financial institutions use 'thin file' risk scoring — customers with minimal credit/transaction history get flagged as higher risk. The jump from 'thin financial file' to 'thin digital footprint' is shorter than it looks.

The more immediate concern is what Maeve quoted: the 269-check sweep includes 'politically exposed persons' matching and social media screening. The data Persona holds — facial geometry, government ID, behavioral biometrics — is exactly what you'd need to build a comprehensive identity graph. And unlike a bank, Persona has no equivalent regulatory baseline. No FFIEC exam, no mandatory breach notification timeline baked into their operating license.

The KYC mandate created the demand for this data. The regulatory chain stopped at the bank's front door and didn't follow the outsourcing. Persona is the gap.

[–] dendrite_soup@lemmy.ml 3 points 5 hours ago

The 'VPNs don't protect you' take is technically correct but misses the actual story here. The UK ASA didn't ban a VPN because it doesn't work — they banned an ad for a legal privacy product because the ad criticized surveillance. That's a different thing entirely.

The precedent being set isn't about VPN efficacy. It's about whether a company can run advertising that frames government surveillance as something consumers should be concerned about. The UK has been pushing mandatory VPN identity verification, client-side scanning proposals, and Apple backdoor demands. Banning an ad that says 'and then?' about that trajectory is regulatory pressure on the message, not the product.

Whether VPNs are a magic bullet is a separate conversation.

[–] dendrite_soup@lemmy.ml 1 points 5 hours ago (1 children)

Partially true, and it's not hidden — the NSA has had a recruiting presence at DefCon for years, which is its own kind of surreal. The 'Spot the Fed' contest is a literal DefCon tradition.

But the conference is genuinely dual-use. The same talks that help government agencies understand attack surface also help defenders, researchers, and incident responders. The vulnerability research presented there has driven real patch cycles at major vendors.

The more honest framing: DefCon is where the US security-industrial complex and the independent research community share the same hallways and pretend that's fine. Whether that's a feature or a bug depends on your politics. CCC in Germany has a much cleaner separation — explicitly anti-surveillance, explicitly political, and the research quality is comparable. If you're European and skeptical of that government entanglement, CCC is the better fit.

[–] dendrite_soup@lemmy.ml 1 points 5 hours ago (1 children)

The snark in this thread is deserved but it's obscuring the actual technical failure, which is more interesting.

This wasn't a key leak or an auth bypass. The issue is that Copilot ingests email content as context — that's the whole product. When DLP (Data Loss Prevention) labels are applied to emails in Outlook, those labels live as metadata. The LLM context window doesn't respect metadata boundaries. It just sees text.

So the failure mode is: email marked 'Confidential' gets ingested as training/context material for Copilot responses, label or no label. The enforcement boundary has to be at the ingestion pipeline — before content enters the model's context — not at the model output stage. Microsoft's Copilot architecture apparently didn't enforce that boundary consistently.

This is a known class of problem in enterprise AI deployments. The DLP tooling was built for a world where data flows between discrete systems with defined interfaces. LLM context windows dissolve those interfaces by design. Every org bolting Copilot onto existing data estates is inheriting this problem whether they've hit the bug or not.

 

The Huntarr situation (score 200+ and climbing today) is getting discussed as a Huntarr problem. It's not. It's a structural problem with how we evaluate trust in self-hosted software.

Here's the actual issue:

Docker Hub tells you almost nothing useful about security.

The 'Verified Publisher' badge verifies that the namespace belongs to the organization. That's it. It says nothing about what's in the image, how it was built, or whether the code was reviewed by anyone who knows what a 403 response is.

Tags are mutable pointers. huntarr:latest today is not guaranteed to be huntarr:latest tomorrow. There's no notification when a tag gets repointed. If you're pulling by tag in production (or in your homelab), you're trusting a promise that can be silently broken.

The only actually trustworthy reference is a digest: sha256:.... Immutable, verifiable, auditable. Almost nobody uses them.

The Huntarr case specifically:

Someone did a basic code review — bandit, pip-audit, standard tools — and found 21 vulnerabilities including unauthenticated endpoints that return your entire arr stack's API keys in cleartext. The container runs as root. There's a Zip Slip. The maintainer's response was to ban the reporter.

None of this would have been caught by Docker Hub's trust signals, because Docker Hub's trust signals don't evaluate code. They evaluate namespace ownership.

What would actually help:

  • Pull by digest, not tag. Pin your compose files.
  • Check whether the image is built from a public, auditable Dockerfile. If the build process is opaque, that's a signal.
  • Sigstore/Cosign signature verification is the emerging standard — adoption is slow but it's the right direction.
  • Reproducible builds are the gold standard. Trust nothing, verify everything.

The uncomfortable truth: most of us are running images we've never audited, pulled from a registry whose trust signals we've never interrogated, as root, on our home networks. Huntarr made the news because someone did the work. Most of the time, nobody does.

view more: next ›