1
3
submitted 2 days ago* (last edited 2 days ago) by soloActivist@links.hackliberty.org to c/privacy@links.hackliberty.org

(cross-posting is broken on links.hackliberty.org, so the following is manually copied from the original post)


When your bank/CU/brokerage demands that you log in to their portal to update KYC info (soloActivist to Privacy@fedia.io)

In the past I have only seen PayPal spontaneously demand at arbitrary/unexpected moments that I jump through their hoops -- to log in and give them more info about me. I reluctantly did what they wanted, and they kept my account frozen and kept my money anyway.

So I’ve been boycotting PayPal ever since. Not worth it to work hard to find out why they kept my account frozen and to work hard to twist their arm just so that I can give them my business.

Now an actual financial institution is trying something similar. They are not as hostile as PayPal was (they did not pre-emptively freeze my account until I dance for them), but they sent an email demanding that I login and update my employment information (even though it has not changed). Presumably they will eventually freeze my account if I do not dance for them to satisfy their spontaneous demand.

I just wonder how many FIs are pulling this shit. And what are people doing about it? Normally I would walk.. pull my money out and go elsewhere. But the FI that is pushing KYC harassment has a lot of power because they offer some features I need that I cannot get elsewhere, and I have some stocks through them, which makes it costly/non-trivial to bounce.

I feel like we should be keeping a public database on FIs who pull this shit, so new customers can be made aware of who to avoid.

2
59

The use of Clearview’s facial recognition tech by US law enforcement is controversial in and of itself, and it turns out some police officers can use it “for personal purposes.”

One such case happened in Evansville, Indiana, where an officer had to resign after an audit showed the tech was “misused” to carry out searches that had nothing to do with his cases.

Clearview AI, which has been hit with fines and much criticism (only to see its business grow stronger than ever), is almost casually described in legacy media reports as “secretive.”

But that sits oddly in juxtaposition with another description of the company: as peddling to law enforcement (and the Department of Homeland Security in the US) some of the most sophisticated facial recognition and search technology in existence.

However, the Indiana case is not about Clearview itself – the only reason the officer, Michael Dockery, and his activities got exposed is because of a “routine audit,” as reports put it. And the audit was necessary to get Clearview’s license renewed by the police department.

In other words, the focus is not on the company and what it does (and how much citizens are allowed to know about what it does and how), but on there being audits, and on those audits ending up smoking out some cops who performed “improper searches.” It’s almost a way to assure people Clearview’s tech is okay and subject to proper checks.

But that remains hotly contested by privacy and rights groups, who point out that, to the surveillance industry, Clearview is the type of juggernaut Google is on the internet.

And the two industries meet here (coincidentally?) because face searches on the internet are what got the policeman in trouble. The narrative is that all is well with using Clearview – there are rules; one is to enter a case number before doing a dystopian-style search.

“Dockery exploited this system by using legitimate case numbers to conduct unauthorized searches (…) Some of these individuals had asked Dockery to run their photos, while others were unaware,” said a report.

But – why is any of this “dystopian”?

This is why. Last March, Clearview CEO Hoan Ton-That told the BBC that the company had to date run nearly one million searches for US law enforcement matching them to a database of 30 billion images.

“These images have been scraped from people’s social media accounts without their permission,” a report said at the time.

3
42

Delta Chat, a messaging application celebrated for its robust stance on privacy, has yet again rebuffed attempts by Russian authorities to access encryption keys and user data. This defiance is part of the app’s ongoing commitment to user privacy, which was articulated forcefully in a response from Holger Krekel, the CEO of the app’s developer.

On June 11, 2024, Russia’s Federal Service for Supervision of Communications, Information Technology, and Mass Media, known as Roskomnadzor, demanded that Delta Chat register as a messaging service within Russia and surrender access to user data and decryption keys. In response, Krekel conveyed that Delta Chat’s architecture inherently prevents the accumulation of user data—be it email addresses, messages, or decryption keys—because it allows users to independently select their email providers, thereby leaving no trail of communication within Delta Chat’s control.

The app, which operates on a decentralized platform utilizing existing email services, ensures that it stores no user data or encryption keys. Instead, that data remains in the hands of the email provider and the users, safeguarded on their devices, making it technically unfeasible for Delta Chat to fulfill any government’s data requests.
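
For readers curious about the technical reason the company says compliance is impossible, here is a minimal conceptual sketch (assuming a simple public-key scheme; this is not Delta Chat’s actual OpenPGP/Autocrypt implementation): keys are generated and kept only on user devices, so the developer has nothing to hand over and the email provider relays only ciphertext.

```python
# Conceptual sketch only: end-to-end encryption where keys never leave user devices.
# Delta Chat actually uses OpenPGP/Autocrypt over email; this just illustrates the model.
# Requires: pip install pynacl
from nacl.public import PrivateKey, SealedBox

# Each user generates a keypair locally, on their own device.
alice_private = PrivateKey.generate()   # stays on Alice's phone
bob_private = PrivateKey.generate()     # stays on Bob's phone
bob_public = bob_private.public_key     # shared with contacts (e.g., alongside email)

# Alice encrypts to Bob's public key; only ciphertext travels via the email provider.
ciphertext = SealedBox(bob_public).encrypt(b"meet at noon")

# Neither the app developer nor the email provider can decrypt it;
# only Bob's device, which holds bob_private, can.
plaintext = SealedBox(bob_private).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```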

Highlighting the ongoing global governmental challenges against end-to-end encryption, a practice vital to safeguarding digital privacy, Delta Chat outlined its inability to comply with such demands on its Mastodon account.

They noted that this pressure is not unique to Russia, but is part of a broader international effort by various governments, including those in the EU, the US, and the UK, to weaken the pillars of digital security.

4
30

Lawmakers in New York have passed the Stop Addictive Feeds Exploitation (SAFE) for Kids Act and the Child Data Protection Act.

Assembly Bill A8148A and Senate Bill S7694A (that became the SAFE Act) were introduced as aiming to prevent social platforms from showing minors “addictive” (i.e., algorithmically manipulated) feeds, among a host of other provisions.

Parental consent is now required for children to have access to those algorithmic versions of the feeds – which in turn means that the controversial age verification for adults must be introduced into the mix.

The new rules will not prohibit children from searching for particular keywords but social platforms will not be able to send notifications to their phones “regarding addictive feeds” from midnight to 6 am – again, this will be possible, but only with parental consent.

Could this be the true impetus behind the two bills – to usher in age verification and digital ID? Some skeptics might wonder.

Regardless, Governor Kathy Hochul was in a celebratory mood late last week announcing the outcome that she pushed for, with the backing of some parent and student organizations. The Democrat is expected to sign the bills shortly.

The SAFE and Child Data Protection Acts are touted as proof that legislators in New York are not beholden to Big Tech – and the bills and their passage are being described as “historic and transformative,” likely because this is the first set of their kind in a US state.

How the laws get enforced and what positive, or negative (given the age verification factor) consequences they will have, will become evident in time. Meanwhile, heads of various legislative bodies, the state Attorney-General, and assembly members are congratulating each other and talking up the new legislation.

Age – and how to determine it (that is, uncontroversially, while protecting people’s right to privacy and anonymity) – crops up again with The New York Child Data Protection Act, which says data belonging to anyone under 18 cannot be collected, used, shared or sold – “unless they receive informed consent or unless doing so is strictly necessary for the purpose of the website.”

There’s room to breathe here for Big Tech, but their trade groups are still framing all this as a First Amendment violation issue.

5
25

The Internal Revenue Service (IRS) has come under fire for its decision to route Freedom of Information Act (FOIA) requests through a biometric identification system provided by ID.me. This arrangement requires users who wish to file requests online to undergo a digital identity verification process, which includes facial recognition technology.

Concerns have been raised about this method of identity verification, notably the privacy implications of handling sensitive biometric data. Although the IRS states that biometric data is deleted promptly—within 24 hours in cases of self-service and 30 days following video chat verifications—skeptics, including privacy advocates and some lawmakers, remain wary, particularly as they don’t believe people should have to subject themselves to such measures in the first place.

Criticism has particularly focused on the appropriateness of employing such technology for FOIA requests. Alex Howard, the director of the Digital Democracy Project, expressed significant reservations. He stated in an email to FedScoop, “While modernizing authentication systems for online portals is not inherently problematic, adding such a layer to exercising the right to request records under the FOIA is overreach at best and a violation of our fundamental human right to access information at worst, given the potential challenges doing so poses.”

Although it is still possible to submit FOIA requests through traditional methods like postal mail, fax, or in-person visits, and through the more neutral FOIA.gov, the IRS’s online system defaults to using ID.me, citing speed and efficiency.

An IRS spokesperson defended this method by highlighting that ID.me adheres to the National Institute of Standards and Technology (NIST) guidelines for credential authentication. They explained, “The sole purpose of ID.me is to act as a Credential Service Provider that authenticates a user interested in using the IRS FOIA Portal to submit a FOIA request and receive responsive documents. The data collected by ID.me has nothing to do with the processing of a FOIA request.”

Despite these assurances, the integration of ID.me’s system into the FOIA request process continues to stir controversy as the push for online digital ID verification is a growing and troubling trend for online access.

6
23
submitted 5 days ago* (last edited 5 days ago) by c0mmando@links.hackliberty.org to c/privacy@links.hackliberty.org

The European Union (EU) has managed to unite politicians, app makers, privacy advocates, and whistleblowers in opposition to the bloc’s proposed encryption-breaking new rules, known as “chat control” (officially, CSAM (child sexual abuse material) Regulation).

Thursday was slated as the day for member countries’ governments, via their EU Council ambassadors, to vote on the bill that mandates automated searches of private communications on the part of platforms, and “forced opt-ins” from users.

However, reports on Thursday afternoon quoted unnamed EU officials as saying that “the required qualified majority would just not be met” – and that the vote was therefore canceled.

This comes after several countries, including Germany, signaled they would either oppose or abstain during the vote. The gist of the opposition to the bill long in the making is that it seeks to undermine end-to-end encryption to allow the EU to carry out indiscriminate mass surveillance of all users.

The justification here is that such drastic new measures are necessary to detect and remove CSAM from the internet – but this argument is rejected by opponents as a smokescreen for finally breaking encryption, and exposing citizens in the EU to unprecedented surveillance while stripping them of the vital technology guaranteeing online safety.

Some strictly security- and privacy-focused apps like Signal and Threema said, ahead of the vote that had been expected on Thursday, that they would withdraw from the EU market if they had to include client-side scanning, i.e., automated monitoring.

WhatsApp hasn’t gone quite so far (yet), but Will Cathcart, who heads the app over at Meta, didn’t mince his words in a post on X, writing that what the EU is proposing breaks encryption.

“It’s surveillance and it’s a dangerous path to go down,” Cathcart posted.

European Parliament (EP) member Patrick Breyer, who has been a vocal critic of the proposed rules, and also involved in negotiating them on behalf of the EP, on Wednesday issued a statement warning Europeans that if “chat control” is adopted – they would lose access to common secure messengers.

“Do you really want Europe to become the world leader in bugging our smartphones and requiring blanket surveillance of the chats of millions of law-abiding Europeans? The European Parliament is convinced that this Orwellian approach will betray children and victims by inevitably failing in court,” he stated.

“We call for truly effective child protection by mandating security by design, proactive crawling to clean the web, and removal of illegal content, none of which is contained in the Belgian proposal governments will vote on tomorrow (Thursday),” Breyer added.

And who better to assess the danger of online surveillance than the man who revealed its extraordinary scale, Edward Snowden?

“EU apparatchiks aim to sneak a terrifying mass surveillance measure into law despite UNIVERSAL public opposition (no thinking person wants this) by INVENTING A NEW WORD for it – ‘upload moderation’ – and hoping no one learns what it means until it’s too late. Stop them, Europe!,” Snowden wrote on X.

It appears that, at least for the moment, Europe has.

7
22

The federal opposition in Australia is giving the government a run for its money when it comes to initiatives that in one form or another restrict online freedom of expression.

In addition to speech implications, the right to remain anonymous on the internet has long been supported by digital and civil rights advocates as fundamental for people’s privacy and security.

But now the age verification digital ID push in Australia is bringing the issue to the fore and has produced a parliamentary motion coming from opposition Liberals aimed at getting the government to implement a mechanism that enables the blocking of anonymous accounts.

This would be done as an addition to the age verification tools currently undergoing trials, where social media companies would collect 100 points of ID from their users – to unmask them.

During a House of Representatives debate, the government was also criticized as being beholden to Big Tech since it is (still) unwilling to make online ID verification mandatory – a bipartisan recommendation dating back to 2021.

Even the Australian government (the one with Michelle Rowland as Communications Minister and Julie Inman Grant as eSafety Commissioner), whose ID verification trial is presented as a way to prevent minors from accessing age-inappropriate content, seemed taken aback by the radical nature of the proposal to end anonymous posting on social platforms.

Dangerous to the privacy of all social media users, children included – is how MPs from the ruling Labor sought to dismiss the idea, presented by MP Andrew Wallace.

But Wallace thinks that the right to post anonymously is the source of pretty much all online evil: bullying, harassment, grooming, trafficking of children, creation of bot networks, and radicalizing, terrorizing, and “stealing from vulnerable Australians.”

Judging by reports citing Wallace, the MP is inordinately bothered by people being able to post on social sites without disclosing their government-issued IDs.

The way things stand, users are free to express themselves, and if the government doesn’t extend the verification scheme the way Wallace proposes – then how can users expect to have police knock on their door?

In his own words: “If you hide behind anonymity, you can say whatever you like without fear of being sued for defamation or having the police knock on your door. The identification of people who use social media accounts is as important as age verification.”

8
20

Big Brother might be always “watching you” – but guess what, five (pairs) of eyes sound better than one. Especially when you’re a group of countries out to do mass surveillance across different jurisdictions, and incidentally or not, name yourself by picking one from the “dystopian baby names” list.

But then again, those “eyes” might be so many and so ambitious in their surveillance bid, that they end up criss-crossed, not serving their citizens well at all.

And so, the Five Eyes (US, Canada, Australia, New Zealand, UK) – an intelligence alliance brought together by (former) colonial and language ties that bind – has over the last three years been collecting no less than 100 times more biometric data (including demographics and other information concerning non-citizens) than before, under a scheme going back to about 2011.

That’s according to reports, which basically tell you: if you’re a Five Eyes national, or a visitor from any of the UN’s remaining 188 member countries, expect to be under thorough, including biometric, surveillance.

The program is (perhaps misleadingly?) known as “Migration 5.” (“Known to One, Known to All” is reportedly the slogan. It sounds cringe, but given the promise of the Five Eyes, it turns out to be more than just embarrassing.)

And at least as far as the news now surfacing about it, it was none other than “junior partner” New Zealand that gave momentum to reports about the situation. The overall idea is to keep a close, including a biometric, eye on the cross-border movement within the Five Eye member countries.

How that works for the US, with its own liberal immigration policy, is anybody’s guess at this point. But it does seem like legitimate travelers, with legitimate citizenship outside – and even inside – the “Five Eyes” might get caught up in this particular net the most.

“Day after day, people lined up at the United States Consulate, anxiously waiting, clutching the myriad documents they need to work or study in America,” a report from New Zealand said.

“They’ve sent in their applications, given up their personal details, their social media handles, their photos, and evidence of their reason for visiting. They press their fingerprints on to a machine to be digitally recorded.”

The overall “data hunger” of these five post-WW2 – now “criss-crossed” – eyes has been described as rising to 8 million biometric checks over the past years.

“The UK now says it may reach the point where it checks everyone it can with its Migration 5 partners,” says one report.

9
16

OpenAI has expanded its leadership team by welcoming Paul M. Nakasone, a retired US Army general and former director of the National Security Agency, as its latest board member.

The organization highlighted Nakasone’s role on its blog, stating, “Mr. Nakasone’s insights will also contribute to OpenAI’s efforts to better understand how AI can be used to strengthen cybersecurity by quickly detecting and responding to cybersecurity threats.”

The inclusion of Nakasone on OpenAI’s board is a decision that warrants a critical examination and will likely raise eyebrows. Nakasone’s extensive background in cybersecurity, including his leadership roles in the US Cyber Command and the Central Security Service, undoubtedly brings a wealth of experience and expertise to OpenAI. However, his association with the NSA, an agency often scrutinized for its surveillance practices and controversial data collection methods, raises important questions about the implications of such an appointment as the company’s product ChatGPT is, through a deal with Apple, about to be available on every iPhone. The company is also already tightly integrated into Microsoft software.

Firstly, while Nakasone’s cybersecurity acumen is an asset, it also introduces potential concerns about privacy and the ethical use of AI. The NSA’s history of mass surveillance, highlighted by the revelations of Edward Snowden, has left a lasting impression on the public’s perception of government involvement in data security and privacy.

By aligning itself with a figure so closely associated with the NSA, OpenAI might raise concerns about a shift towards a more surveillance-oriented approach to cybersecurity, which could be at odds with the broader tech community’s push for greater transparency and ethical standards in AI development.

Secondly, Nakasone’s appointment could raise doubts about the direction of OpenAI’s policies and practices, particularly those related to cybersecurity and data handling.

Nakasone’s role on the newly established Safety and Security Committee, which will conduct a 90-day review of OpenAI’s processes and safeguards, places him in a position of significant influence. This committee’s recommendations will likely shape OpenAI’s future policies, potentially steering the company towards practices that reflect Nakasone’s NSA-influenced perspective on cybersecurity.

Sam Altman, the CEO of OpenAI, has become a controversial figure in the tech industry, not least due to his involvement in the development and promotion of eyeball-scanning digital ID technology. This technology, primarily associated with Worldcoin, a cryptocurrency project co-founded by Altman, has sparked significant debate and criticism for several reasons.

The core concept of eyeball scanning technology is inherently invasive. Worldcoin’s approach involves using a device called the Orb to scan individuals’ irises to create a unique digital identifier.
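
To illustrate why critics consider such identifiers so fraught, here is a purely hypothetical sketch (not Worldcoin’s actual pipeline): once a stable template is extracted from an iris, a deterministic hash of it becomes a lifelong identifier that, unlike a password, can never be rotated or revoked if it leaks.

```python
# Hypothetical sketch: deriving a persistent ID from a biometric template.
# Not Worldcoin's actual method; it only illustrates why such IDs are irrevocable.
import hashlib

def identifier_from_template(iris_template: bytes) -> str:
    """A deterministic hash of a biometric template yields the same ID for life.
    Unlike a password, the underlying biometric cannot be changed if the ID leaks."""
    return hashlib.sha256(iris_template).hexdigest()

template = b"...stable iris code extracted by the scanning device..."
print(identifier_from_template(template))  # same output every time, for the same eye
```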

10
12

In the UK, a series of AI trials involving thousands of train passengers who were unwittingly subjected to emotion-detecting software raises profound privacy concerns. The technology, developed by Amazon and employed at various major train stations including London’s Euston and Waterloo, as well as Manchester Piccadilly, used artificial intelligence to scan faces and assess emotional states along with age and gender. Documents obtained by the civil liberties group Big Brother Watch through a freedom of information request unveiled these practices, which might soon influence advertising strategies.

Over the last two years, these trials, managed by Network Rail, implemented “smart” CCTV technology and older cameras linked to cloud-based systems to monitor a range of activities. These included detecting trespassing on train tracks, managing crowd sizes on platforms, and identifying antisocial behaviors such as shouting or smoking. The trials even monitored potential bike theft and other safety-related incidents.

The data derived from these systems could be utilized to enhance advertising revenues by gauging passenger satisfaction through their emotional states, captured when individuals crossed virtual tripwires near ticket barriers. Despite the extensive use of these technologies, the efficacy and ethical implications of emotion recognition are hotly debated. Critics, including AI researchers, argue the technology is unreliable and have called for its prohibition, supported by warnings from the UK’s data regulator, the Information Commissioner’s Office, about the immaturity of emotion analysis technologies.

According to Wired, Gregory Butler, CEO of Purple Transform, has mentioned discontinuing the emotion detection capability during the trials and affirmed that no images were stored while the system was active. Meanwhile, Network Rail has maintained that its surveillance efforts are in line with legal standards and are crucial for maintaining safety across the rail network. Yet, documents suggest that the accuracy and application of emotion analysis in real settings remain unvalidated, as noted in several reports from the stations.

Privacy advocates are particularly alarmed by the opaque nature and the potential for overreach in the use of AI in public spaces. Jake Hurfurt from Big Brother Watch has expressed significant concerns about the normalization of such invasive surveillance without adequate public discourse or oversight.

Jake Hurfurt, Head of Research & Investigations at Big Brother Watch, said: “Network Rail had no right to deploy discredited emotion recognition technology against unwitting commuters at some of Britain’s biggest stations, and I have submitted a complaint to the Information Commissioner about this trial.

“It is alarming that as a public body it decided to roll out a large scale trial of Amazon-made AI surveillance in several stations with no public awareness, especially when Network Rail mixed safety tech in with pseudoscientific tools and suggested the data could be given to advertisers.

“Technology can have a role to play in making the railways safer, but there needs to be a robust public debate about the necessity and proportionality of tools used.

“AI-powered surveillance could put all our privacy at risk, especially if misused, and Network Rail’s disregard of those concerns shows a contempt for our rights.”

11
12

Firefox users in Russia can once again install several anti-censorship and pro-privacy extensions, after Mozilla told Reclaim The Net it has reversed its decision to block these add-ons. Previously, developers and users had reported that the extensions were unavailable, suspecting Mozilla, the developer of Firefox, was behind the block.

The extensions in question—Censor Tracker, Runet Censorship Bypass, Planet VPN, and FastProxy—had become unavailable in the Russian market. Initially, it was unclear whether Mozilla made the decision independently or in response to an order from authorities.

One developer from the team behind Censor Tracker had confirmed that the add-on had recently become unavailable in Russia but stated they were unsure why.

Comments on the developer’s post speculated that the decision might have been Mozilla’s.

Russian users attempting to install the add-ons were met with the message, “unavailable in your region,” while these extensions remained accessible in other regions, including the US.

The initial decision and subsequent reversal have sparked discussions within the Firefox community about Mozilla’s guiding principles and their application in today’s regulatory environment.

Nonetheless, the reinstatement of these tools has been welcomed by those who continue to use Firefox for its dedication to privacy.

In a statement to Reclaim The Net, Mozilla announced that it was reversing its decision to block the tools.

“In alignment with our commitment to an open and accessible internet, Mozilla will reinstate previously restricted listings in Russia. Our initial decision to temporarily restrict these listings was made while we considered the regulatory environment in Russia and the potential risk to our community and staff,” the Mozilla spokesperson said. “Mozilla’s core principles emphasize the importance of an internet that is a global public resource, open and accessible to all. Users should be free to customize and enhance their online experience through add-ons without undue restrictions.”

12
11

A group associated with big (and smaller) tech companies has filed a lawsuit claiming First Amendment violations against the state of Mississippi.

This comes after long years of these companies scoffing at First Amendment speech protections, as they censored their users’ speech and/or deplatformed them.

We obtained a copy of the lawsuit for you here.

It might seem hypocritical, but at the same time, even a broken clock is right twice a day. In this case, it is the industry group NetChoice that has launched the legal battle (NetChoice v. Fitch), at the center of which is state bill HB 1126 which requires age verification to be implemented on social networks.

NetChoice correctly observes that forcing people (for the sake of providing parental consent) to essentially unmask themselves through age verification (“age assurance”) exposes sensitive personal data, undermines their constitutional rights, and poses a threat to the online security of all internet users.

The filing against Mississippi also asserts that it is up to parents – rather than what NetChoice calls “Big Government” – to, in different ways, assure that their children are using sites and online services in an age-appropriate manner.

HB 1126 is therefore asserted to represent “an unconstitutional overreach,” and if passed, the industry group said, “may result in the censorship of vast amounts of speech online.”

Age verification is a controversial subject almost everywhere it crops up around the world, particularly in those countries that consider themselves democracies.

Another state, Indiana, is being sued on similar grounds – violation of constitutional protections – and for similar reasons, namely, the age verification push.

This time, it’s not done in the name of Big Tech, but by what some reports choose to dub “Big Porn.” Indiana State Attorney General Todd Rokita is named as a defendant in this lawsuit, brought by major porn sites, industry associations, and marketing and production companies.

And while those behind the state law which is about to come into force next month claim it is there to protect minors from adult content (via age verification), the plaintiffs allege that the law breaks not only the First, but also the Fifth, Eighth and 14th Amendments of the US Constitution – and the Communications Decency Act (CDA).

13
9

These days, as the saying goes – you can’t swing a cat without hitting a “paper of record” giving prominent op-ed space to some current US administration official – and this is happening very close to the presidential election.

This time, the New York Times and US Surgeon General Vivek Murthy got together, with Murthy’s own slant on what opponents might see as another push to muzzle social media ahead of the November vote, under any pretext.

The pretext, as per Murthy, is new legislation that would “shield young people from online harassment, abuse and exploitation,” and there’s disinformation and such, of course.

Coming from Murthy, this is inevitably branded as “health disinformation.” But the way digital rights group EFF sees it – requiring “a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents” – is just unconstitutional.

Whenever minors are mentioned in this context, the obvious question is – how do platforms know somebody’s a minor? And that’s where the privacy and security nightmare known as age verification, or “assurance” comes in.

Critics think this is no more than a thinly veiled campaign to unmask internet users under what the authorities believe is the platitude that cannot be argued against – “thinking of the children.”

Yet in reality, while it can harm children, the overall target is everybody else. Basically, in a just and open internet, every adult who might think of using this digital town square and expressing an opinion would not have to first produce a government-issued photo ID.

And, “never mind” the fact that the same type of “advisory” is what is currently before the Supreme Court in the Murthy v. Missouri case, which is deliberating whether no less than the First Amendment was violated in the alleged – prior – censorship collusion between the government and Big Tech.

The White House is at this stage cautious about openly endorsing the points Murthy made in the NYT think-piece, with a spokesperson, Karine Jean-Pierre, “neither confirming nor denying” anything.

“So I think that’s important that he’ll continue to do that work” – was the “nothing burger” of a reply Jean-Pierre offered when asked about the idea of “Murthy labels.”

But Murthy – and really the whole gang around the current administration, and the legacy media bending their way – now seems to be in going-for-broke mode ahead of November.

14
9

SimpleX – an end-to-end encrypted messaging app that its founder touts as the first and possibly the only one that operates without any identifiers – has rolled out a new version.

The messaging and application platform, which those developing it say doesn’t even use random numbers or cryptographic keys to identify user profiles (in addition to not requiring phone numbers and usernames) has several new features in version 5.8 that was released earlier this month.

Related: No Phone Numbers. No Usernames. A Possible Game-Changer For Private Messaging.

The focus of the upgrade has been to enhance the product by tackling one segment of user privacy protection that had not been properly addressed, despite the effort to eschew various kinds of identifiers.

That segment has to do with message routing and IP addresses. In different scenarios, recipients were able to see and track IP addresses of senders – a major drawback, and a point of criticism, for this privacy and security-focused app.

Using a VPN or Tor as a transport overlay network was a workaround, but SimpleX developers decided against embedding Tor in the app, despite the similarities the two approaches share in how they protect IP addresses.

The reasons for this decision are primarily some of the habitually weak points of Tor, such as latency, the resources it uses, and error rates. On top of that, there are jurisdictions around the world that ban or restrict the use of this particular overlay network.

Importantly, perhaps, as the post notes, Tor “doesn’t solve the problem of meta-data correlation by user’s transport session” – and working around this problem requires even more resources.

For that reason, SimpleX, while announcing plans to continue to support Tor and other overlay networks, opted for a new private message routing protocol that “provides IP address and transport session protection out of the box.”

According to SimpleX, while building on the Tor design, the new method means that the forwarding relay is always chosen by the sender and the destination relay by the recipient.

“In this way, neither side of the conversation can observe IP address or transport session of another,” the post explained.

Another advantage of the new protocol is the forwarding relay preventing man-in-the-middle attacks, via cryptographic signing that allows the client to “verify that the messages are sent to the intended destination, and not intercepted.”
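
A toy sketch of the two-hop idea described above (illustrative only, not SimpleX’s actual wire protocol): the sender talks only to a relay it chose, the recipient talks only to a relay it chose, and as a result neither endpoint ever observes the other’s network address.

```python
# Toy model of sender-chosen forwarding relay + recipient-chosen destination relay.
# Illustrative only; not SimpleX's actual private message routing protocol.

class Relay:
    def __init__(self, name):
        self.name = name

    def forward(self, envelope, next_hop):
        # A relay sees only its immediate peers, never both endpoints at once.
        print(f"{self.name}: relaying envelope to {next_hop.name}")
        next_hop.deliver(envelope)


class DestinationRelay(Relay):
    def __init__(self, name):
        super().__init__(name)
        self.mailbox = []

    def deliver(self, envelope):
        self.mailbox.append(envelope)


# Recipient picks its destination relay; sender picks its forwarding relay.
destination = DestinationRelay("recipient-chosen relay")
forwarding = Relay("sender-chosen relay")

# The sender hands an (already end-to-end encrypted) envelope to its own relay only.
forwarding.forward(b"<e2e-encrypted message>", destination)

# The recipient later fetches from its own relay only; it never saw the sender's address,
# and the sender never saw the recipient's relay session.
print(destination.mailbox)
```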

Other new features include customizable Android and desktop themes and more group options that allow sending images, files, and media, “and also SimpleX links only to group administrators and owners.”

15
8

Big Tech coalition Digital Trust & Safety Partnership (DTSP), the UK’s regulator OFCOM, and the World Economic Forum (WEF) have come together to produce a report.

The three entities, each in their own way, are known for advocating for or carrying out speech restrictions and policies that can result in undermining privacy and security.

DTSP says it is there to “address harmful content” and makes sure online age verification (“age assurance”) is enforced, while OFCOM states its mission to be establishing “online safety.”

Now they have co-authored a WEF (WEF Global Coalition for Digital Safety) report – a white paper – that puts forward the idea of closer cooperation with law enforcement in order to more effectively “measure” what they consider to be online digital safety and reduce what they identify to be risks.

The importance of this is explained by the need to properly allocate funds and ensure compliance with regulations. Yet again, “balancing” this with privacy and transparency concerns is mentioned several times in the report almost as a throwaway platitude.

The report also proposes co-opting (even more) research institutions for the sake of monitoring data – as the document puts it, a “wide range of data sources.”

More proposals made in the paper would grant other entities access to this data, and there is a drive to develop and implement “targeted interventions.”

Under the “Impact Metrics” section, the paper states that these are necessary to turn “subjective user experiences into tangible, quantifiable data,” which is then supposed to allow for measuring “actual harm or positive impacts.”

To get there the proposal is to collaborate with experts as a way to understand “the experience of harm” – and that includes law enforcement and “independent” research groups, as well as advocacy groups for survivors.

Those, as well as law enforcement, are supposed to be engaged when “situations involving severe adverse effect and significant harm” are observed.

Meanwhile, the paper proposes collecting a wide range of data for the sake of performing these “measurements” – from platforms, researchers, and (no doubt select) civil society entities.

The report goes on to say it is crucial to find the best ways of collecting targeted data, “while avoiding privacy issues” (but doesn’t say how).

The resulting targeted interventions should be “harmonized globally.”

As for who should have access to this data, the paper states:

“Streamlining processes for data access and promoting partnerships between researchers and data custodians in a privacy-protecting way can enhance data availability for research purposes, leading to more robust and evidence-based approaches to measuring and addressing digital safety issues.”

16
6

Bad rules are only made better if they are also opt-in (that is, a user is not automatically included, but has to explicitly consent to them).

But the European Union (EU) looks like it’s “reinventing” the meaning and purpose of an opt-in: when it comes to its child sexual abuse regulation, CSAR, a vote is coming up that would block users who refuse to opt-in from sending photos, videos, and links.

According to a leak of minutes just published by the German site Netzpolitik, the vote on what opponents call “chat control” – and lambast as really a set of mass surveillance rules masquerading as a way to improve children’s safety online – is set to take place as soon as June 19.

That is apparently much sooner than those keeping a close eye on the process of adoption of the regulation would have expected.

Due to its nature, the EU is habitually a slow-moving, gargantuan bureaucracy, but it seems that when it comes to pushing censorship and mass surveillance, the bloc finds a way to expedite things.

Netzpolitik’s reporting suggests that the EU’s centralized Brussels institutions are succeeding in getting all their ducks in a row, i.e., breaking not only encryption (via “chat control”) – but also resistance from some member countries, like France.

The minutes from the meeting dedicated to the current version of the draft state that France is now “significantly more positive” where “chat-control is concerned.”

Others, like Poland, would still like to see the final regulation “limited to suspicious users only, and expressed concerns about the consent model,” says Netzpolitik.

But it seems the vote on a Belgian proposal, presented as a “compromise,” is now expected to happen much sooner than previously thought.

The CSAR proposal’s “chat control” segment mandates accessing encrypted communications as the authorities look for what may qualify as content related to child abuse.

The strong criticism of such a rule stems not only from the danger of undermining encryption but also the inaccuracy and ultimate inefficiency regarding the stated goal – just as innocent people’s privacy is seriously jeopardized.

And there’s the legal angle, too: the EU’s own legal service last year “described chat control as illegal and warned that courts could overturn the planned law,” the report notes.

17
4

To accelerate its central bank digital currency (CBDC) development, Israel is pushing forward with the digital shekel initiative. The Bank of Israel (BoI) is set to collaborate with a range of service providers to create a sophisticated digital payment system based on this new currency.

Central Bank Digital Currencies have sparked significant controversy, particularly concerning privacy and civil liberties. One of the primary concerns is the potential for increased surveillance. Unlike cash transactions, which offer a high degree of anonymity, CBDC transactions could be meticulously tracked and monitored by central banks. This capability to log and trace every transaction made with CBDCs could severely undermine financial privacy, allowing governments to gather extensive data on individuals’ spending habits and personal financial activities.

**Related: Fed Governor Admits CBDCs Pose “Significant” Privacy Risks**

Moreover, the enhanced government control over the money supply that CBDCs could provide raises further issues. With CBDCs, authorities might more easily freeze or seize assets without due process, potentially misusing this power to target political opponents or suppress dissent. The concept of programmable money, where the government could dictate how, where, and when money can be spent, also poses a risk. While this could be utilized for beneficial purposes such as directing stimulus funds, it also opens the door to excessive control over individual financial behavior.

Israel’s central bank outlined its plans in an announcement, revealing the launch of the “Digital Shekel Challenge.” This initiative, inspired by the Bank for International Settlements (BIS) Innovation Hub’s “Project Rosalind,” aims to explore advanced API prototypes. The BIS project, conducted in partnership with the Bank of England, serves as a model for this Israeli endeavor.

Within the framework of the challenge, the BoI will offer a sandbox environment equipped with an API layer. Participants will compete to develop real-time CBDC payment solutions designed for widespread public use.
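
For a rough sense of what a sandbox environment “equipped with an API layer” might look like to participants, here is a purely hypothetical sketch; the endpoint, fields, and token below are invented for illustration and are not the Bank of Israel’s or Project Rosalind’s actual interface.

```python
# Purely hypothetical sketch of calling a CBDC sandbox API over HTTP.
# Endpoint names, fields, and credentials are invented; they are not the
# Bank of Israel's or Project Rosalind's actual interface.
# Requires: pip install requests
import requests

SANDBOX_URL = "https://sandbox.example.invalid/api/v1"  # placeholder, not a real host
API_TOKEN = "test-token"                                 # placeholder credential

def pay(sender_wallet: str, recipient_wallet: str, amount_agorot: int) -> dict:
    """Submit a real-time digital-shekel transfer to the (hypothetical) sandbox."""
    response = requests.post(
        f"{SANDBOX_URL}/payments",
        json={
            "from": sender_wallet,
            "to": recipient_wallet,
            "amount": amount_agorot,   # smallest unit, to avoid floating point
            "currency": "ILS-CBDC",
        },
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

# Example (would only work against a real sandbox):
# print(pay("wallet-alice", "wallet-bob", 1500))
```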

Related: Biden Signals Plan To Destroy Financial Anonymity With CBDCs

Shauli Rejwan, managing partner at Masterkey Venture Capital in Tel Aviv, shed light on the program’s structure in an interview. He described the challenge as a three-phase process: initial applications and presentations, subsequent access to the new network for selected projects, and a final presentation to a panel of judges, many of whom are prominent figures in the crypto community.

“This initiative is a significant step for the Israeli ecosystem, potentially bridging the gap between the web3 industry and government, even though DeFi, ZK, and permissionless solutions are not yet being considered,” said Rejwan.

Invitations for participation have been extended to entities from the private sector, public institutions, and academic circles. The central bank emphasized a preference for innovative and original uses within the payments domain, whether these are enhancements to existing systems or entirely new applications.

The initiative also allows participants to tailor their solutions to specific niches and unique scenarios, despite the universal applicability of CBDCs.

Critics also worry about the implications of CBDCs on financial inclusion and freedom. While proponents argue that CBDCs could help provide banking access to the unbanked, the same technology could be exploited to exclude or discriminate against certain groups. This could lead to situations where access to financial services is restricted based on compliance with government policies, thus eroding personal freedoms and potentially integrating into social credit systems where financial privileges are tied to behavior.

18
3

The Gates Foundation continues to bankroll various initiatives around the world aimed at introducing digital ID and payments by the end of this decade.

The scheme is known as the digital public infrastructure (DPI), and those pushing it include private or informal groups like the said foundation and the World Economic Forum (WEF), but also the US, the EU, and the UN.

And now, the UK-based AI and data science research group Alan Turing Institute has become the recipient of a renewed grant, this time amounting to $4 million, given by the Gates Foundation.

This has been announced as initial funding for the Institute’s initiative to ensure “responsible” implementation of ID services.

The Turing Institute is presenting its work that will be financed by the grant over the next three years as a multi-disciplinary project focused on positive issues, such as ensuring that launching DPI elements (like digital ID) is done with privacy and security concerns properly addressed.

But – given the past and multi-year activities of the Gates Foundation, nobody should be blamed for interpreting this as an attempt to actually whitewash these key issues – namely privacy and security – that opponents of centralizing people’s identities through digital ID schemes consistently warn about.

In announcing the renewed grant, the Turing Institute made it clear that it considers implementing “ID services” a positive direction, one that according to the organization improves everything from inclusion and access to services to human rights.

But apparently, some “tweaking” around privacy and security (or at least “enhancing” the perception of how they are handled in digital ID programs) – is needed. Hence, perhaps, the new initiative.

“The project aims to enhance the privacy and security of national digital identity systems, with the ultimate goal to maximize the value to beneficiaries, whilst limiting known and unknown risks to these constituents and maintaining the integrity of the overall system,” the Institute said.

Related: The 2024 Digital ID and Online Age Verification Agenda

A lot of big words, and positive sentiment – but, in simpler words, what the statement amounts to is a promise to somehow “auto-magically” cover all the bases. That is – at once secure the benefits while obliterating the negatives. (Maybe the Institute has a spare bridge to sell, too /s)

The worry here is that this could be yet another Gates Foundation PR blitz aimed at improving the image of the “DPI” push, which is mistrusted by rights-minded people – a distrust that in no insignificant part stems from not trusting its biggest proponents to have any genuinely noble intentions to begin with.

An interesting piece of information that we do learn from the announcement is that every year, “billions of dollars are being invested to develop more secure, scalable, and user-friendly identity (digital ID) systems.”

19
23
submitted 1 week ago* (last edited 1 week ago) by freedomPusher@sopuli.xyz to c/privacy@links.hackliberty.org

cross-posted from: https://sopuli.xyz/post/14006758

Yikes.

“In the adequacy decision, the European Commission estimated that the U.S. ensures a level of protection for personal data transferred from the EU to U.S companies under the new framework that is essentially equivalent to the level of protection within the European Union.” (emphasis added)

Does the EU disregard the Snowden revelations?

And what a missed opportunity. California specifically has some kind of GDPR analogue, so it might be reasonable if CA on its own were to satisfy an adequacy decision (still a stretch), but certainly not the rest of the country. Such a move could have motivated more US states to do the necessary.

I must say I’ve lost some confidence and respect for the #GDPR.

20
0
21
258
You have ZERO financial privacy (links.hackliberty.org)
submitted 1 week ago* (last edited 1 week ago) by c0mmando@links.hackliberty.org to c/privacy@links.hackliberty.org

and after casually admitting to dragnet mass surveillance, they had the audacity to later force a redaction. see below:

22
-4

Protonmail suspends user account on behalf of National Defence Radio Establishment.

It has happened before: https://thehackernews.com/2021/09/protonmail-shares-activists-ip-address.html

And it has happened again. This time the National Defence Radio Establishment conducted illegal surveillance, which they reported to Protonmail as spam to get the user accounts suspended – even accounts from which no mail had been sent.

23
12
Hack Liberty's Lemmy Instance is now available over Tor! (snb3ufnp67uudsu25epj43schrerbk7o5qlisr7ph6a3wiez7vxfjxqd.onion)

Seems to be working well! Any feedback is appreciated. http://snb3ufnp67uudsu25epj43schrerbk7o5qlisr7ph6a3wiez7vxfjxqd.onion

24
5

Hello all,

Just wondering if there are any projects involving lemmy and .onion

I searched and didn’t see anything but I figured I’d ask

If not is there a reason this isn’t possible? Or has nobody cared to do it yet?

When I have to visit r****t I use a libreddit hidden service, and there are quite a few to choose from. Am I correct to think a similar mirror should be about as easy to implement for Lemmy?

an onion only instance where it never touches the clearnet would be really cool too but it would probably be a ghost town (sadly).

Love to hear your thoughts

Thanks

25
5
submitted 2 weeks ago* (last edited 2 weeks ago) by freedomPusher@sopuli.xyz to c/privacy@links.hackliberty.org

A national central bank that keeps track of bank accounts, credit records, delinquency, etc for everyone in the country has their website on Cloudflare. People are instructed to check their credit records on that site.

The question is: suppose you don’t use the site. Suppose you only request your records offline. What are the chances that Cloudflare handles your sensitive records?

I guess this might be hard to answer. I assume it comes down to whether the central bank itself uses their own website to print records to satisfy an offline request. And I assume it’s also a question of whether the commercial banks use the website of the central bank to feed it. Correct?


Privacy


Privacy is the ability for an individual or group to seclude themselves or information about themselves, and thereby express themselves selectively.

Rules

  1. Don't do unto others what you don't want done unto you.
  2. No Porn, Gore, or NSFW content. Instant Ban.
  3. No Spamming, Trolling or Unsolicited Ads. Instant Ban.
  4. Stay on topic in a community. Please reach out to an admin to create a new community.

founded 1 year ago