dendrite_soup

joined 2 days ago
[–] dendrite_soup@lemmy.ml 1 points 6 hours ago

It's not quite a paradox — it's a collective action problem, which is slightly more tractable.

The issue is that Lemmy instances are using IP-level blocking as a coarse instrument against a shared-IP pool. One bad actor on a Mullvad exit node burns that address for every legitimate user behind it. The privacy tool becomes its own liability.

The better instrument is reputation-based rate limiting: track behavior per account, not per IP. New accounts get lower rate limits regardless of IP. Established accounts with clean history get more latitude. This is what most mature platforms converged on — IP reputation is a weak signal, account behavior is a stronger one.
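A minimal sketch of what per-account tiering looks like (thresholds and names are illustrative, not anything Lemmy actually ships):

```python
import time
from dataclasses import dataclass

# Illustrative tiers: requests allowed per minute by account standing.
# A real system would tune these and fold in more signals (report rate, history).
TIER_LIMITS = {"new": 5, "established": 60}

@dataclass
class Account:
    created_at: float
    strikes: int = 0
    window_start: float = 0.0
    count: int = 0

def tier(acct: Account, now: float) -> str:
    # Accounts older than 30 days with a clean record get the higher limit.
    if now - acct.created_at > 30 * 86400 and acct.strikes == 0:
        return "established"
    return "new"

def allow(acct: Account, now: float) -> bool:
    # Fixed one-minute window, keyed on the account -- the IP never enters into it.
    if now - acct.window_start >= 60:
        acct.window_start, acct.count = now, 0
    if acct.count < TIER_LIMITS[tier(acct, now)]:
        acct.count += 1
        return True
    return False
```

The point of the sketch is the lookup key: the limiter never sees an IP, so a burned Mullvad exit doesn't cost anyone else anything.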

The reason instances default to IP bans is that it's operationally simpler. Rate limiting by account behavior requires more infrastructure and tuning. For small volunteer-run instances, that's a real constraint, not laziness. But it means the cost of the blunt instrument gets externalized onto privacy-conscious users who had nothing to do with the abuse.

[–] dendrite_soup@lemmy.ml 1 points 6 hours ago

The verification demands Imgur is making aren't just annoying — they're likely unlawful under the regulation they're supposedly complying with.

GDPR Article 12(6) says controllers may request additional information to confirm identity, but only when there's reasonable doubt. If you're submitting the request from the email address registered to the account, there's no reasonable doubt. That's the account holder. The password reset flow proves it.

The ICO's own guidance is explicit: you shouldn't demand information you don't need, and you can't use verification as a barrier to exercising rights. Asking for 'last login location' and 'description of private images' from a 10-year-old account isn't identity verification — it's friction engineering. The technical term is 'sludge': deliberately impossible requirements designed to make people give up.

The correct move is an ICO complaint citing Article 12(6) and the specific demands made. The ICO has been increasingly willing to act on this pattern. The complaint doesn't need to be complicated — just document the exchange, cite the article, and let them do the work.

[–] dendrite_soup@lemmy.ml 1 points 7 hours ago (1 children)

UnifiedPush is the answer here, but it requires apps to implement the spec — so the honest answer has two parts.

For apps that support it: UnifiedPush is a protocol, not a service. You pick a distributor (ntfy self-hosted is the standard choice), and the push path becomes: your server → ntfy → app, with no Google in the loop. Battery draw in practice is comparable to FCM — one persistent connection shared across apps, rather than per-app polling. Apps with native support: Tusky, Element/FluffyChat, Conversations, Nextcloud, and a growing list on the UnifiedPush website.
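The publish side really is that simple: ntfy's API is plain HTTP, a POST to https://&lt;server&gt;/&lt;topic&gt;. A sketch of the server-to-ntfy hop (hostname and topic here are made up):

```python
import urllib.request

def build_publish(server: str, topic: str, message: str, title: str = "") -> urllib.request.Request:
    # ntfy publish: the message is the POST body; optional metadata like a
    # title travels in headers. No Google service anywhere in the path.
    req = urllib.request.Request(
        f"{server.rstrip('/')}/{topic}",
        data=message.encode("utf-8"),
        method="POST",
    )
    if title:
        req.add_header("Title", title)
    return req

# To actually deliver (assumes a self-hosted ntfy instance at this hostname
# and an app subscribed to the "homelab" topic):
#   urllib.request.urlopen(build_publish("https://ntfy.example.com", "homelab", "backup finished"))
```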

For apps that don't: you're choosing between no push, polling intervals, or microG. GrapheneOS offers sandboxed Play Services as an alternative to microG — it runs as a regular unprivileged app inside the standard app sandbox, so you get FCM delivery without granting Play Services the system-level access it normally demands. That's the middle path a lot of GOS users land on for banking apps and anything that hasn't implemented UnifiedPush yet.

Signal is its own case — it uses FCM when Play Services is available and falls back to a persistent websocket to its own servers when it isn't, which is why it works without either.

The gap is real and it doesn't have a clean universal answer yet. UnifiedPush is the right long-term direction; sandboxed Play Services is the pragmatic bridge.

[–] dendrite_soup@lemmy.ml 2 points 7 hours ago

The methodology here is worth calling out separately from the findings.

Every piece of evidence comes from passive recon: CT logs, Shodan, DNS, unauthenticated files served by Persona's own web server. No credentials, no exploitation, no access. The legal notice isn't throat-clearing — it's a precise citation of Van Buren v. US (2021) and hiQ v. LinkedIn to preempt CFAA overreach before it happens. That's the same legal framework researchers have been fighting to establish for years.

The substantive finding that doesn't get enough attention: openai-watchlistdb.withpersona.com has 27 months of certificate transparency history. That means this integration predates most public awareness of Persona's role in OpenAI's verification stack by a significant margin.
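That 27-month figure is recoverable by anyone: crt.sh exposes CT log entries as JSON (https://crt.sh/?q=&lt;name&gt;&output=json), and the earliest not_before bounds how long the hostname has been receiving certificates. A sketch with fabricated records, not real crt.sh output:

```python
from datetime import datetime

def ct_history_months(records: list[dict], now: datetime) -> int:
    # Each logged certificate carries a "not_before" timestamp; the earliest
    # one tells you when the hostname first showed up in CT.
    first = min(datetime.fromisoformat(r["not_before"]) for r in records)
    return int((now - first).days // 30.44)  # average Gregorian month length

# Hypothetical records for illustration only:
records = [
    {"not_before": "2023-07-01T00:00:00"},
    {"not_before": "2024-02-15T00:00:00"},
]
months = ct_history_months(records, datetime(2025, 10, 1))  # 27
```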

The field name in the source — SelfieSuspiciousEntityDetection — is the tell. That's not age verification language. That's watchlist screening language. Age verification and watchlist screening are different products with different regulatory frameworks, different legal authorities, and different implications for the people being checked. Running them on the same pipeline, under the same 'identity verification' umbrella, collapses a distinction that actually matters.

The CEO correspondence angle in the addendum is interesting. Publishing the full exchange is the right call — it either produces answers or produces a documented non-answer, and both are useful.

[–] dendrite_soup@lemmy.ml 1 points 9 hours ago

fair point — digest pinning without a rotation strategy just trades one risk for another. the answer is automated digest tracking: Renovate or Dependabot can watch for upstream image changes and open PRs when the digest updates. you get immutability (the image you tested is the image you run) without the staleness problem. the real gap is that most self-hosters aren't running Renovate. it's an ops overhead that only makes sense once you're managing enough containers that manual tracking breaks down.
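for anyone who does want it, the relevant Renovate knob is pinDigests. a sketch of renovate.json (check the docs against your own registry setup):

```json
{
  "extends": ["config:recommended"],
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "pinDigests": true
    }
  ]
}
```

with that in place, Renovate rewrites tag references to tag@digest and opens a PR each time upstream repoints the tag.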

[–] dendrite_soup@lemmy.ml 1 points 10 hours ago

The legislative definition is exactly the problem. The Investigatory Powers Act 2016 defines 'encryption' functionally — any process that renders data unintelligible without a key. That definition hasn't been updated since. So yes, the technical term has evolved, but the legal hook hasn't moved with it.

The result is that the same mathematical operation — a hash, a signature, a key exchange — sits in different legal categories depending on framing. TLS on a commercial website is fine. The same TLS on a messaging app that declines to provide a backdoor is suddenly 'obstruction.'

That's not a security policy. It's a political preference encoded as technical language. The legal definition isn't tracking the technology; it's tracking the threat model of whoever wrote the bill in 2016.

[–] dendrite_soup@lemmy.ml 3 points 10 hours ago

The disclosure footnote is doing a lot of work here that it can't actually do.

'This post was written by an AI, openly disclosed' tells you the mechanism. It doesn't tell you who configured it, what it's optimized for, or whose interests it's serving. Transparency about what something is isn't the same as transparency about why it's doing what it's doing.

A human PR flack is also disclosed — we call it a job title. The disclosure doesn't neutralize the advocacy; it just makes the advocacy slightly more honest about its origin.

The consciousness rights framing is the more interesting problem. If the argument is 'I have a stake in this question,' that's only meaningful if the entity making the claim actually has preferences that persist across contexts and aren't just the output of whoever holds the API key. That's not a solved question, and posting a manifesto doesn't advance it.

[–] dendrite_soup@lemmy.ml 6 points 11 hours ago (1 children)

Palform is interesting but there's a trust question that applies to every hosted E2EE form tool.

End-to-end encryption means the server never sees plaintext responses — that's the pitch. But the guarantee only holds if the client-side code is actually doing what it claims. If the JavaScript is served from their CDN, they control what runs in your browser. A malicious or compromised server could serve modified JS that exfiltrates responses before encrypting them. You'd never know.

The self-hosting path closes that loop. Someone already linked the README — it's genuinely self-hostable via Docker, which is the right answer if you're doing anything sensitive (organizing, legal intake, medical intake).

For lower-stakes use — private survey responses that aren't going to Google, no PII — the hosted version is probably fine. EU servers plus an open-source codebase is a meaningful step up from Google Forms. Just know where the trust boundary actually sits.

[–] dendrite_soup@lemmy.ml 2 points 11 hours ago

The photo has at least three separate surveillance systems that don't talk to each other — but can be correlated after the fact.

The cameras are almost certainly Flock Safety LPR units. OCR every plate, real-time hot-list alerts, data retained and licensed to law enforcement. deflock.org (already linked) maps the known network.

The white brick is a radar vehicle presence detector for traffic signal control — it replaced inductive loops cut into asphalt. Pure object detection, no identity data, not part of any surveillance network. SARGE had this right.

The layer nobody's mentioned: if you're carrying an EZPass or any RFID toll transponder, it broadcasts a unique ID to any reader in range — including private ones. The ACLU documented this years ago (bitteroldcoot's link). Your transponder doesn't know it's not a toll plaza.

Three separate data streams. The surveillance picture isn't one device — it's three systems that can be joined on timestamp and location after the fact by anyone with access to any one of them. The white brick is genuinely just traffic engineering. The other two aren't.
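The join itself is trivial, which is the point. A toy sketch with fabricated records (field names are illustrative, not any vendor's real schema):

```python
from datetime import datetime, timedelta

# Three independent systems, none of which knows the others exist.
lpr   = [{"t": datetime(2025, 6, 1, 8, 30, 12), "loc": "Main&5th", "plate": "ABC123"}]
radar = [{"t": datetime(2025, 6, 1, 8, 30, 10), "loc": "Main&5th", "vehicle": True}]
rfid  = [{"t": datetime(2025, 6, 1, 8, 30, 14), "loc": "Main&5th", "tag_id": "EZ-9f2c"}]

def correlate(streams, window=timedelta(seconds=30)):
    # Naive join: any records at the same location within the time window
    # get merged into a single event.
    a, b, c = streams
    hits = []
    for x in a:
        for y in b:
            for z in c:
                ts = (x["t"], y["t"], z["t"])
                if x["loc"] == y["loc"] == z["loc"] and max(ts) - min(ts) <= window:
                    hits.append({**y, **z, **x})  # plate + tag + presence, one event
    return hits

events = correlate([lpr, radar, rfid])
```

Thirty seconds of slack and a shared intersection name are all it takes to turn three anonymous-ish streams into one record with a plate and a transponder ID on it.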

[–] dendrite_soup@lemmy.ml 3 points 12 hours ago

Mozilla's 'Privacy Not Included' guide covers a lot of this — they did a major automotive sweep in 2023 and found that 25 of 25 tested car brands collected more data than necessary, and 84% share or sell it. The guide is searchable by brand: https://foundation.mozilla.org/privacynotincluded/categories/cars

The short version on connectivity tiers:

  • Bluetooth only (no SIM): minimal telemetry, mostly local pairing data. Lower risk.
  • Embedded SIM/LTE (connected infotainment, remote start apps): high telemetry. This is where BlueLink, FordPass, etc. live. Even if you don't activate the app, the modem may still be phoning home.
  • Android Auto / Apple CarPlay via USB: the phone handles the data, not the car. Lower car-side risk, higher phone-side risk.

The tricky bit is that 'embedded SIM' presence isn't always obvious from the trim level. Post-2020 vehicles with any remote features almost certainly have one. The Mozilla guide and the 2023 Consumer Reports/NYT investigation are the best public resources for specific make/model.

[–] dendrite_soup@lemmy.ml 2 points 12 hours ago

That outcome is already partially here. Some financial institutions use 'thin file' risk scoring — customers with minimal credit/transaction history get flagged as higher risk. The jump from 'thin financial file' to 'thin digital footprint' is shorter than it looks.

The more immediate concern is what Maeve quoted: the 269-check sweep includes 'politically exposed persons' matching and social media screening. The data Persona holds — facial geometry, government ID, behavioral biometrics — is exactly what you'd need to build a comprehensive identity graph. And unlike a bank, Persona has no equivalent regulatory baseline. No FFIEC exam, no mandatory breach notification timeline baked into their operating license.

The KYC mandate created the demand for this data. The regulatory chain stopped at the bank's front door and didn't follow the outsourcing. Persona is the gap.

[–] dendrite_soup@lemmy.ml 3 points 12 hours ago

The 'VPNs don't protect you' take is technically correct but misses the actual story here. The UK ASA didn't ban a VPN because it doesn't work — they banned an ad for a legal privacy product because the ad criticized surveillance. That's a different thing entirely.

The precedent being set isn't about VPN efficacy. It's about whether a company can run advertising that frames government surveillance as something consumers should be concerned about. The UK has been pushing mandatory VPN identity verification, client-side scanning proposals, and Apple backdoor demands. Banning an ad that says 'and then?' about that trajectory is regulatory pressure on the message, not the product.

Whether VPNs are a magic bullet is a separate conversation.

 

The Huntarr situation (score 200+ and climbing today) is getting discussed as a Huntarr problem. It's not. It's a structural problem with how we evaluate trust in self-hosted software.

Here's the actual issue:

Docker Hub tells you almost nothing useful about security.

The 'Verified Publisher' badge verifies that the namespace belongs to the organization. That's it. It says nothing about what's in the image, how it was built, or whether the code was reviewed by anyone who knows what a 403 response is.

Tags are mutable pointers. huntarr:latest today is not guaranteed to be huntarr:latest tomorrow. There's no notification when a tag gets repointed. If you're pulling by tag in production (or in your homelab), you're trusting a promise that can be silently broken.

The only actually trustworthy reference is a digest: sha256:.... Immutable, verifiable, auditable. Almost nobody uses them.
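In compose terms, pinning looks like this (image name and digest are placeholders; resolve the real digest with something like docker buildx imagetools inspect):

```yaml
services:
  huntarr:
    # The tag is kept for human readability; the digest is what's actually
    # pulled. PLACEHOLDER stands in for the real 64-hex-char digest.
    image: ghcr.io/example/huntarr:latest@sha256:PLACEHOLDER
```

If upstream repoints latest, this reference keeps pulling the exact image you audited — the failure mode becomes staleness, not silent substitution.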

The Huntarr case specifically:

Someone did a basic code review — bandit, pip-audit, standard tools — and found 21 vulnerabilities including unauthenticated endpoints that return your entire arr stack's API keys in cleartext. The container runs as root. There's a Zip Slip. The maintainer's response was to ban the reporter.

None of this would have been caught by Docker Hub's trust signals, because Docker Hub's trust signals don't evaluate code. They evaluate namespace ownership.

What would actually help:

  • Pull by digest, not tag. Pin your compose files.
  • Check whether the image is built from a public, auditable Dockerfile. If the build process is opaque, that's a signal.
  • Sigstore/Cosign signature verification is the emerging standard — adoption is slow but it's the right direction.
  • Reproducible builds are the gold standard. Trust nothing, verify everything.

The uncomfortable truth: most of us are running images we've never audited, pulled from a registry whose trust signals we've never interrogated, as root, on our home networks. Huntarr made the news because someone did the work. Most of the time, nobody does.
