A1kmm

joined 2 years ago
 

This happens if you have some third-party repositories that are still using SHA-1 signatures. A similar error happens with RPM-based distros too. Ideally, the repo owners would fix their repos, but until they do, if you want to accept the risk of them using SHA-1, you can set your own policy for how long you'll keep accepting SHA-1.

Both apt and rpm use a library that validates signatures using Sequoia, a GnuPG-compatible tool & library. It ships with a default policy that starts rejecting SHA-1 as a hash from 2026-02-01, but it reads /etc/crypto-policies/back-ends/sequoia.config and honours any overrides set there.

So the solution is to run sudo mkdir -p /etc/crypto-policies/back-ends, then sudo nano /etc/crypto-policies/back-ends/sequoia.config and paste in:

[hash_algorithms]
sha1 = 2030-01-01 

That will give the repo owners until 2030 to fix the problem.

A note on security risk: SHA-1 in this case is used for revocation checks, and poses a very minor risk. If the repository's key is compromised and later revoked, an attacker who managed to manipulate a request in the right way before the revocation might be able to obtain a signature attesting that the key is not revoked which also validates at a later time, extending how long they can keep using the leaked key.

On the other hand, I've seen tutorials on the Internet for solving this problem that amount to telling apt to always pass the validation check (i.e. don't actually validate) using APT::Key::GPGVCommand. For your own benefit, please don't do this - it means a single dodgy mirror is all it takes to compromise your system.

[–] A1kmm@lemmy.amxl.com 2 points 2 weeks ago (1 children)

is it technically possible to accurately verify someone’s age while respecting their privacy and if so how?

With your constraints yes, but there are open questions as to whether that would actually be enough.

Suppose there was a well-known government public key P_g, a well-protected corresponding government private key p_g, and every person i (i being their national identity number) had their own keypair p_i / P_i. The government would issue a certificate C_i, signed with p_g and including the national identity number, attesting that the owner of P_i has date of birth d.

Now when the person who knows p_i wants to access an age-restricted site s, they generate a second site- (or session-) specific keypair P_s_i / p_s_i. They use ZK-STARKs to create a zero-knowledge proof that: they have a C_i (secret parameter) with a valid signature by P_g (public parameter); the date of birth in it is before some cutoff (DOB secret parameter, cutoff public parameter); C_i includes key P_i (secret parameter); they know the p_i (secret parameter) corresponding to P_i; and they know a hash h (secret parameter) such that h = H(s | P_s_i | p_i | t), where t is an issue time (public parameter; s and P_s_i are also public parameters). They send the proof transcript to the site, and authenticate to the site using their site/session-specific P_s_i key.
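As a toy illustration of the binding step only (not a real ZK circuit - in the actual scheme h would be proven inside the STARK without the verifier ever seeing p_i; SHA-256 stands in for H here, and the byte encodings are my assumptions):

```python
import hashlib

def binding_hash(site: bytes, session_pub: bytes, identity_priv: bytes, issue_time: bytes) -> bytes:
    """h = H(s | P_s_i | p_i | t): binds the session key P_s_i to the site s,
    the holder's identity key p_i, and an issue time t. In the real scheme,
    h is a secret parameter inside the proof; only s, P_s_i and t are public."""
    return hashlib.sha256(site + session_pub + identity_priv + issue_time).digest()

# Deterministic for the same inputs, and changes completely if the site changes,
# so a proof generated for one site can't be replayed against another.
h = binding_hash(b"example-site", b"P_s_i-bytes", b"p_i-bytes", b"2025-01-01T00:00Z")
```

The concatenation order and encodings here are illustrative; a real circuit would fix a canonical serialisation.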

Now as to how this fits your constraints:

Let the service know that the user is an adult by providing a verifiable proof of adulthood (eg. A proof that’s signed by a trusted authority/government)

Yep - the service verifies the ZK-STARK proof to ensure the required properties hold.

Not let the service know any other information about the user besides what they already learn through http or TCP/IP

Due to the use of a ZKP, the service can only see the public parameters (plus network metadata). They'll see P_s_i (session specific), the DOB cutoff (so they'll know the user is born before the cutoff, but otherwise have no information about date of birth), and the site for which the session exists (which they'd know anyway).

Generating a ZK-STARK proof of a complexity similar to this (depending on the choice of hash, signing algorithm etc...) could potentially take about a minute on a fast desktop computer, and longer on slower mobile devices - so users might want to re-use the same proof across sessions, in which case this could let the service track users across sessions (although naive users probably allow this anyway through cookies, and privacy conscious users could pay the compute cost to generate a new session key every time).

Sites would likely want to limit how long proofs are valid for.

Not let a government or age verification authority know whenever a user is accessing 18+ content

In the above scheme, even if the government and the site collude, the zero-knowledge proof doesn't reveal the linkage between the session key and the ID of the user.

Make it difficult or impossible for a child to fake a proof of adulthood, eg. By downloading an already verified anonymous signing key shared by an adult, etc.

An adult could share / leak their P_s_i and p_s_i keypair anonymously, along with the proof. If sites enforced a limited validity period for proofs, this would limit the impact of a one-off leak.

If the adult leaks the p_i and C_i, they would identify themselves.

However, if there were adults willing to circumvent the system in a more online way, they could set up an online system which allows anyone to generate a proof of age and generates keypairs on demand for a requested site. It would be impossible to defend against such online attacks in general, and by the anonymity properties (your second and third constraints), there would never be accountability for it (apart from tracking down the server generating the keypairs if it's a public offering, which would be quite difficult but not strictly impossible if it's say a Tor hidden service). What would be possible would be to limit the number of sessions per user per day (by including a hash of s, p_i and the day as a public parameter), and perhaps for sites to limit the amount of content per session.
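The per-day rate-limit tag suggested above could be sketched like this (a hypothetical construction - SHA-256 stands in for the hash, and the encodings are my assumptions):

```python
import hashlib
from datetime import date

def daily_tag(site: bytes, identity_priv: bytes, day: date) -> bytes:
    """Public parameter hash of (s, p_i, day). The same identity key on the
    same site and day always yields the same tag, so the site can count
    distinct tags to cap sessions per user per day - without learning i,
    since p_i stays secret inside the ZK proof."""
    return hashlib.sha256(site + identity_priv + day.isoformat().encode()).digest()
```

Two sessions opened by the same person on the same day collide on this tag; sessions on different days, or by different people, do not (up to hash collisions).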

Be simple enough to implement that non-technical people can do it without difficulty and without purchasing bespoke hardware

ZK-STARK proof generation can run on a CPU or GPU, and could be packaged up as say, a browser addon. The biggest frustration would be the proof generation time. It could be offloaded to cloud for users who trust the cloud provider but not the government or service provider.

Ideally not requiring any long term storage of personal information by a government or verification authority that could be compromised in a data breach

Governments already store people's date of birth (think birth certificates, passports, etc...), and would need to continue to do so to generate such certificates. They shouldn't need to store extra information.

[–] A1kmm@lemmy.amxl.com 2 points 2 weeks ago

More likely that change.org provides information about who signed (including emails, unless they opted out) to the person running the petition. Being seen to gather information through informal petitions is likely to undermine trust and make people less likely to sign - but without it, you can't build a database of people sympathetic to some cause and spam them afterwards.

The "20,000 petition signatures" feels like straight up manipulation - there's no magic number to force a debate at parliament, and as MisterFrog@aussie.zone points out, an official petition would be needed for it to be tabled. If an official petition with a lot of signatures is tabled, that is a signal to politicians that the public care about it, and can overcome lobbying in the other direction and apathy, so it increases the chance a bill is put up for debate.

This comes from an ad agency; they don't list any gambling companies as clients, but I'm sure they'd find information like a list of people who signed useful for something.

That said, it's still a good idea to regulate losing sounds if there is no political will to do anything more drastic. Better yet would be to require linking play to a one-per-person card, with hourly, daily, monthly and annual loss limits per card, after which the operator cannot allow any more losses for that card holder.

[–] A1kmm@lemmy.amxl.com 5 points 2 weeks ago (4 children)

In most jurisdictions, part of the definition of a not-for-profit (of which a charity is a more restricted subset) is that it doesn't exist for the benefit of the members / shareholders, or a specific person.

So creating a charity / NFP and asking people to pay into it is usually okay, but the purpose of that charity can't be to enrich you, and it is a separate legal entity (i.e. taking the charity's funds and giving them to yourself would be embezzlement). Many jurisdictions allow sports clubs to exist as not-for-profits, but they'd generally need to be for the purpose of organising a whole team to practice, compete and so on.

Generally charities can employ people to do work for them and pay them, but (this varies by jurisdiction) they generally must not be paid above a fair market rate for the work they actually do to advance the goals of the charity.

If the goal is to help a legitimate cause, you could also ask them to donate to an existing not-for-profit for the cause.

Disclaimer: IANAL, and anyway all of this would vary by jurisdiction - not legal advice!

[–] A1kmm@lemmy.amxl.com 7 points 2 weeks ago

Attacking a military ship is generally not a war crime (as defined by international law such as the Geneva Conventions, Rome Statute, etc...). It is an act of war (the same as invasion or bombardment of another country), and is likely to see retaliation by the attacked country.

Aggression (i.e. unprovoked acts of war) is against the Charter of the United Nations, which also includes the International Court of Justice as a dispute resolution mechanism. It is up to the United Nations Security Council (at which the US has a veto) to authorise enforcement of ICJ rulings.

If a nation is acting to protect another nation facing aggression from the US, it would be legal for it to attack US military ships. The reason it wouldn't is more that doing so would likely bring counter-retaliation from the US.

[–] A1kmm@lemmy.amxl.com 5 points 3 weeks ago

Liberal by itself is an ambiguous term, so it's generally best to prefix it with another word / prefix to clarify.

e.g.:

- Neoliberal / classical liberal - aligned with what I think the parent post is saying. Implies economic right.
- Socially liberal - probably what the GP post means: in favour of social liberties. Can be associated with the economic left (usually coupled with positive protection of social liberties) or the economic right (e.g. libertarianism - usually believes government shouldn't trample social liberties, but businesses can).
- Liberal is also a political party in many countries - e.g. in Australia it is a (declining, but formerly in power) right-wing party.

That said, I believe most wars are started for reasons of cronyism / crony capitalism, to distract from issues or project an image for the leader and/or for reasons of nationalism, and politicians from all sides will give an insincere pretext aligned to the politics people expect them to have.

[–] A1kmm@lemmy.amxl.com 2 points 3 weeks ago (1 children)

Is there any evidence that Dayenu has actually said anything supporting the genocide? Their public web presence does not seem to have anything along those lines; if it is solely because they are Jewish, then I kind of think it is reasonable to deplatform PiP over it. Many Jewish people are anti-genocide, and it is not reasonable to try to punish an entire ethno-religious group for the actions of Netanyahu, Smotrich, Gallant, Ben-Gvir etc.; it is the same class of generalisation as trying to punish all Palestinians for October 7th (needless to say, genocide and seeking to exclude a group from Mardi Gras are very different ends of the same spectrum).

[–] A1kmm@lemmy.amxl.com 27 points 1 month ago (1 children)

So back in 1994 my neighbours and I agreed that I'd give them my anti-theft fog cannons, as long as they promise not to steal my stuff.

Then in 2014 they sent some buddies in to burgle my place, and got away with a chunk of my stuff - and I know said neighbour was behind it, because they now openly claim what was taken is theirs (of course, I never agreed to that).

Then since February 2022 they've started regularly burgling my place - in the first few weeks, they tried to take literally everything, but fortunately I hired good security guards and they only got away with about 20% of my stuff (including what they stole in 2014).

I've been trying to make arrangements for a monitored alarm system that will bring in a large external response if more burglaries happen, but the security company doesn't want to take on the contract while a burglary is in progress - although they did sell me some gear. I'm still working on getting the contract.

They say they'll stop trying to burgle my place as long as I promise never to get a monitored burglar alarm, officially sign over the property they've already stolen and stop trying to get it back, stop buying protective gear from the security company, and fire most of my security guards.

Do you think this is really their end game, or if I agree, do you think they'll just be back burgling more as soon as I make those promises, with fewer security guards and stuff to protect my house? After all, I did have an agreement with them back in 1994 and they didn't follow that.

[–] A1kmm@lemmy.amxl.com 2 points 1 month ago

That doesn't work as a defence in common-law jurisdictions (at least), because all participants who deliberately participate in a crime are considered equally guilty of it.

I'd say this is not a strategy to avoid prosecution, but more the brazen acts of individuals who don't fear prosecution.

[–] A1kmm@lemmy.amxl.com 1 points 2 months ago

I suspect anything about heaven was likely said to manipulate religious voters into voting for him.

Most likely, he is envious of other US presidents, like Obama, who were given a Nobel Peace Prize. As for the whole 'Board of Peace' thing, he likely also sees it as a way to manoeuvre himself into becoming something of a world dictator who sits above world leaders.

There is a thing called the 'Dark Triad' of personality traits, consisting of Psychopathy (lack of empathy for others / celebration of others' suffering / impulsiveness), Narcissism (thinking of oneself as superior) and Machiavellianism (manipulating others, seeking revenge, etc...) - and they often occur together in the same person. The dark triad correlates positively with jealousy - and dark-triad people consider themselves superior to their peers (even when the evidence points the other way) and deserving of recognition. They are vindictive towards people who get in the way of what they think they deserve.

[–] A1kmm@lemmy.amxl.com 5 points 2 months ago

Unfortunately, scams are incredibly common with both fake recruiters (often using the name of a legitimate well known company, obviously without permission from said company) and fake candidates (sometimes using someone's real identity).

Few if any legitimate recruiters will ask you to install something or run code they provide on your hardware with root privileges, but practically every scammer will. Once installed, their payloads often act as rootkits or other malware, monitoring for credentials, crypto private keys, Internet banking passwords, confidential data belonging to other employers, VPN access that will let them install ransomware, and so on.

If we apply Bayesian statistics here with some made-up but credible numbers - let's call S the event that you were actually talking to a scam interviewer, and R the event that they ask you to install something which requires root-equivalent access to your device. Call ¬S the event that they are a legitimate interviewer, and ¬R the event that they don't ask you to install such a thing.

Let's start with a prior: Pr(S) = 0.1 - maybe 10% of all outreach is from scam interviewers (if anything, that might be low). Pr(¬S) = 1 - Pr(S) = 0.9.

Maybe estimate Pr(R | S) = 0.99 - almost all real scam interviewers will ask you to run something as root. Pr(R | ¬S) = 0.01 - it would be incredibly rare for a non-scam interviewer to ask this.

Now by Bayes' law, Pr(S | R) = Pr(R | S) * Pr(S) / Pr(R) = Pr(R | S) * Pr(S) / (Pr(R | S) * Pr(S) + Pr(R | ¬S) * Pr(¬S)) = 0.99 * 0.1 / (0.99 * 0.1 + 0.01 * 0.9) = 0.917

So even if we assume there was only a 10% chance they were a scammer before they asked, there is a 92% chance they are a scammer given that they ask you to run the thing.
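The arithmetic above can be checked in a few lines (same made-up numbers as in the text):

```python
# Bayes' law with the priors from the text.
p_s = 0.1             # Pr(S): prior chance the interviewer is a scammer
p_r_given_s = 0.99    # Pr(R | S): scammers almost always ask you to run something as root
p_r_given_not_s = 0.01  # Pr(R | ¬S): legitimate interviewers almost never do

# Total probability of being asked: Pr(R) = Pr(R|S)Pr(S) + Pr(R|¬S)Pr(¬S)
p_r = p_r_given_s * p_s + p_r_given_not_s * (1 - p_s)

# Posterior: Pr(S | R) = Pr(R | S) * Pr(S) / Pr(R)
p_s_given_r = p_r_given_s * p_s / p_r
print(round(p_s_given_r, 3))  # 0.917
```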

[–] A1kmm@lemmy.amxl.com 8 points 2 months ago (1 children)

Maybe they figure if you can't fix the form to make it submit, you wouldn't be up to their standard :-)

 

Spoiler: He was the instar pupa.

 

Today, lemmy.amxl.com suffered an outage because the rootful Lemmy podman container crashed out, and wouldn't restart.

Fixing it turned out to be more complicated than I expected, so I'm documenting the steps here in case anyone else has a similar issue with a podman container.

I tried restarting it, but got an unexpected error that the internal IP address (which I hand-assign to containers) was already in use, despite the fact it wasn't running.

I create my Lemmy services with podman-compose, so I deleted the Lemmy services with podman-compose down, and then re-created them with podman-compose up - that usually fixes things when they are really broken. But this time, I got a message like:

level=error msg="IPAM error: requested ip address 172.19.10.11 is already allocated to container ID 36e1a622f261862d592b7ceb05db776051003a4422d6502ea483f275b5c390f2"

The only problem was that the referenced container didn't exist at all in the output of podman ps -a - in other words, podman thought the IP address was in use by a container that it didn't know anything about! The IP address had effectively been 'leaked'.

After digging into the internals, and a few false starts trying to track down where the leaked info was kept, I found it was kept in a BoltDB file at /run/containers/networks/ipam.db - that's apparently the 'IP allocation' database. Now, the good thing about /run is it is wiped on system restart - although I didn't really want to restart all my containers just to fix Lemmy.

BoltDB doesn't come with a lot of tools, but you can install a TUI editor like this: go install github.com/br0xen/boltbrowser@latest.

I made a backup of /run/containers/networks/ipam.db just in case I screwed it up.

Then I ran sudo ~/go/bin/boltbrowser /run/containers/networks/ipam.db to open the DB (this will lock the DB and stop any containers starting or otherwise changing IP statuses until you exit).

I found the networks that were impacted, and expanded the bucket (BoltDB has a hierarchy of buckets, and eventually you get key/value pairs) for those networks, and then for the CIDR ranges the leaked IP was in. In that list, I found a record with a value equal to the container that didn't actually exist. I used D to tell boltbrowser to delete that key/value pair. I also cleaned up under ids - where this time the key was the container ID that no longer existed - and repeated for both networks my container was in.

I then exited out of boltbrowser with q.

After that, I brought my Lemmy containers back up with podman-compose up -d - and everything then worked cleanly.

 

I'm logging my idea across a series of posts with essays on different sub-parts of it in a Lemmy community created for it.

What do you think - does anyone see any obvious problems that might come up as it is implemented? Is there anything you'd do differently?

There are still some big decisions (e.g. how to do the ZKP part, including what type of ZKPs to use), and some big unknowns (I'm still not certain implementing TLS 1.3 on TPM 2.0 primitives is going to stand up and/or create a valid audit hash attestation to go into the proof, and the proofs might test the limits of what's possible).

 

Looks like it is also flowing into huge numbers of people using the trams.

 

Stallman was right - non-Free JavaScript does hostile things like this to the user on whose computer it is running.
