submitted 2 weeks ago* (last edited 2 weeks ago) by freedomPusher@sopuli.xyz to c/privacy@links.hackliberty.org

A national central bank that keeps track of bank accounts, credit records, delinquency, etc. for everyone in the country has its website on Cloudflare. People are instructed to check their credit records on that site.

The question is: suppose you don’t use the site. Suppose you only request your records offline. What are the chances that Cloudflare handles your sensitive records?

I guess this might be hard to answer. I assume it comes down to whether the central bank itself uses its own website to print records to satisfy an offline request. And I assume it’s also a question of whether the commercial banks use the central bank’s website to feed it data. Correct?


Costco, which ranks among the three largest retailers in the US (and globally), is joining the club of big corporations that have access to massive amounts of customer data and are launching various schemes to monetize it.

Costco’s plan, still in the testing phase, is to develop a way to target ads at customers both on its website and in stores. The numbers are staggering and draw on the retailer’s loyalty program: the shopping behavior of people in 75.5 million households. And a membership card is a requirement for anyone shopping at Costco.

The promised access to shopping habits inevitably includes past purchases. The corporation’s Assistant Vice President of Retail Media, Mark Williamson, also promised future partners that they will be able not only to target ads at (“reach”) members of the Costco loyalty program, but also to “reach the right members in the right context based on past behavior.”

Costco’s effort to keep up with the trends in both retail and ad industries is evident in the fact that Williamson’s job was only created last September.

The race to start using customer data for advertising is rapidly spreading beyond Big Tech to the likes of Walmart and Target, but also to mega banks, financial institutions, and hotel chains.

Some observers believe these new players in the market have realized they can fill gaps in the lucrative business created by regulatory efforts that aim to protect personal data in the places where it has traditionally been harvested and monetized – on Big Tech platforms and services.

Playing up the benefit that creating the ad network will supposedly have for customers, Williamson is quoted as saying that the huge profit the giant looks likely to generate in this way will be “reinvested to keeping prices low.”

What is certain to have a low price tag attached to it is the cost of retailers like Costco getting into the advertising business. It’s cheap to run ads, and customers will keep returning whether the ads are there or not, reports note.


The federal opposition in Australia is giving the government a run for its money when it comes to initiatives that in one form or another restrict online freedom of expression.

In addition to speech implications, the right to remain anonymous on the internet has long been supported by digital and civil rights advocates as fundamental for people’s privacy and security.

But now the age verification and digital ID push in Australia is bringing the issue to the fore, and it has produced a parliamentary motion from the opposition Liberals aimed at getting the government to implement a mechanism for blocking anonymous accounts.

This would be done as an addition to the age verification tools currently undergoing trials, where social media companies would collect 100 points of ID from their users – to unmask them.

During a House of Representatives debate, the government was also criticized as being beholden to Big Tech since it is (still) unwilling to make online ID verification mandatory – a bipartisan recommendation dating back to 2021.

Even the Australian government (the one with Michelle Rowland as Communications Minister and Julie Inman Grant as eSafety Commissioner), whose ID verification trial is presented as a way to prevent minors from accessing age-inappropriate content, seemed taken aback by the radical nature of the proposal to end anonymous posting on social platforms.

Dangerous to the privacy of all social media users, children included – is how MPs from the ruling Labor sought to dismiss the idea, presented by MP Andrew Wallace.

But Wallace thinks that the right to post anonymously is the source of pretty much all online evil: bullying, harassment, grooming, trafficking of children, creation of bot networks, and radicalizing, terrorizing, and “stealing from vulnerable Australians.”

Judging by reports citing Wallace, the MP is inordinately bothered by people being able to post on social sites without disclosing their government-issued IDs.

The way things stand, users are free to express themselves, and if the government doesn’t extend the verification scheme the way Wallace proposes – then how can users expect to have police knock on their door?

In his own words: “If you hide behind anonymity, you can say whatever you like without fear of being sued for defamation or having the police knock on your door. The identification of people who use social media accounts is as important as age verification.”


The European Union (EU) is planning to implement a new set of draconian mass surveillance rules shortly after Sunday’s EU Parliament election, a member of the EP has warned after the plans surfaced on the internet.

The conclusion that radical surveillance measures are in the works proceeds from documents detailing the meetings of working groups, dubbed “high level group(s) on access to data for effective law enforcement.”

The documents originate from the EU Commission, and contain a number of recommendations, including reintroducing indiscriminate retention of communications data in the bloc, creation of encryption backdoors, as well as forcing hardware manufacturers to give access to anything from phones to cars to law enforcement through what is known as “access by design.”

MEP Patrick Breyer announced that the plan contains 42 points produced by the EU Commission and governments of member-countries. The purpose of being able to access phones, IoT (such as “smart home”) devices, and cars is to make sure they can be monitored around the clock.

Meanwhile, the return of controversial data retention is planned despite a previous ruling of the EU Court of Justice, and could even be extended to include over-the-top services such as messengers (this is defined as retaining IP information data “at the very least”). That, Breyer explains, means that all internet activities will become trackable.

A favorite target of authorities actively undermining their image as democracies has for a while been end-to-end encryption. Here, the EU intends to ban secure encryption of metadata and subscriber data, as well as force messaging services that implement encryption to allow interception.

The EU further plans to “tackle” the use of encryption devices that it declares are “proven to be used solely” by criminals. In reality, the right to install encryption backdoors in phones and computers can be abused to spy on anyone, dissidents and critics included.

Technology providers will, if so ordered by judicial authorities, have to break encryption in order to “facilitate access to data at rest in user’s devices.” And there will be “mechanisms for robust cooperation with communication and technology providers” – meaning they will have to share data with governments and law enforcement.

If these agencies demand, service providers must activate GPS location tracking, according to these recommended “solutions for effective law enforcement.” Representatives of providers who refuse could end up in jail.

“This extreme surveillance plan must not become a reality, if only because it has been cooked up by a completely one-sided secret group of surveillance fanatics working without any real transparency or democratic legitimacy,” Breyer stated.


Anyone with even cursory knowledge of how user interface/experience (UX) design – and user behavior – works will tell you that “default (settings) is king.”

That is why the ability to “opt out” (that is, to remove yourself from a feature baked in as a default), although it seems fair enough at first glance, is often a deliberate choice to harm users’ interests by banking on their inertia.

But when even that option is degraded into what some refer to as “nearly impossible to opt out” – things start looking really bad.

They get even worse when you learn that this concerns your personal data being used to train AI models. And by none other than that paragon of habitual disrespect for user privacy and security, Meta.

This is what Instagram and Facebook users in the EU (and the UK) are being told about the whole “operation”: “We’re getting ready to expand our AI at Meta experiences to your region.”

Screenshot from Meta

But “your region” – i.e., the UK and the 27-member bloc – has some fairly strict legislation in place, at least formally, to protect the privacy and security of online data. And so the Meta message sent to those users continued:

“To help bring these experiences to you, we’ll now rely on the legal basis called legitimate interests for using your information to develop and improve AI at Meta. This means that you have the right to object to how your information is used for these purposes. If your objection is honored, it will be applied from then on.”

“If your objection is honored” – might just be the most bizarre example of corporations trying to get “opt-out” to mean squat.

Outrageous as this looks, that’s by no means the end or the worst of the story: if you’re not in the UK or the EU – you won’t even get the “courtesy” of this notification that’s wrong to begin with, in so many ways.

Screenshot from Meta

That’s because the EU has legislation that is supposed to strongly protect privacy – the GDPR – and the UK, after leaving the bloc, enacted much the same rules.

In other words, elsewhere in the world, you don’t even know what new ways Meta has found to, eventually and effectively, monetize your data without your knowledge.

And now for the real point of the story: how will the EU (and the UK) react to all this?

Always quick to turn the screw on Meta in a bid to make it expand its censorship – will these governments even react to the giant making a mockery of their data protection laws by adding a gazillion hurdles to the “opt out” process?


A heavily criticized federal ID program, Real ID, is set to become mandatory for domestic air travel in the US on May 7, 2025.

Once in effect, the federal scheme will ban adults from domestic flights unless they have replaced “traditional” state-issued IDs with Real ID. And the mandate will also extend to citizens’ ability to access some federal facilities.

The origin of the legislation goes back to 2005 and the REAL ID Act, which was explained by the Department of Homeland Security as standardization of the issuance of ID cards, driver’s licenses, and similar forms of identification.

Starting in the spring of 2025, travelers will no longer be able to use “traditional” driver’s licenses, although passports are an option.

That means that for adults in the US without a passport (and according to the State Department, less than half have one), getting a REAL ID or an Enhanced ID will be the only options – the latter is also accepted for crossing sea or land borders with Mexico and Canada.

Rights groups like EFF sum the situation up as the US government forcing states “to turn your driver’s license into a national ID” – with dire consequences for privacy, on top of monetary costs.

And, according to EFF, the declared goal – improving national security – will not be achieved at all.

This organization compares Real ID and the single national database housing people’s data to the creeping undermining of privacy and expansion of surveillance that happened in comparable past scenarios.

“Remember the Social Security number started innocuously enough but it has become a prerequisite for a host of government services and been co-opted by private companies to create massive databases of personal information,” EFF writes, urging state legislators to “resist” implementing REAL ID.

The ACLU expressed similar concerns regarding surveillance and privacy, but also financial and administrative burdens that come with the scheme, announcing that it has joined those states that are opposed to the law and are seeking to get it repealed.

“By definitively turning driver’s licenses into a form of national identity documents, REAL ID would have a tremendously destructive impact on privacy,” ACLU said on its website, noting that these concerns have held back full adoption in many states.


Frances Haugen is “thinking of the children” – but also, of end-to-end encryption.

Some other whistleblowers whose actions benefited societies across the world have ended up in prison or exile.

Meanwhile, censorship-supporting Haugen, known as the Facebook “whistleblower,” is out there publishing memoirs, investing in cryptocurrency – and most recently, presenting proposals to none other than Meta’s shareholders.

Together with the anti-encryption group Heat Initiative and Proxy Impact (which specializes in shareholder proxy vote campaign consulting for left-leaning organizations), and several other groups, Haugen addressed Meta’s annual shareholders’ meeting.

They want Meta’s Board to accept a proposal for the company to produce a child safety report each year, which would demonstrate whether Meta is “adequately reducing harm to children on its platforms.”

The proposal’s most notable feature is an attack on end-to-end encryption, with Haugen naming specifically this (and only this) technology in relation to child abuse threats.

To drive her point home, Haugen cited the EU launching an investigation into Meta for its suspected failure to “reduce physical and mental health risks to young users.”

The proposal (“Proposal #11”) calls for more transparency in the disclosure of Meta’s business metrics, which she said do not cover children’s own reporting about how they “experience safety” on Meta’s platforms.

Such reporting would help Meta gain new users and keep existing ones, and would be well received by advertisers, legislators, etc., Haugen told the shareholders, playing on their interest in anything that might affect the company’s bottom line.

But the only specific harm Haugen names is encryption, and she backed up that claim by citing law enforcement – notoriously interested in simplifying and expediting mass surveillance by undermining security, i.e., encryption, on the internet – as well as “child safety experts.”

And they, Haugen said, agree that “Meta’s expanding end-to-end encryption without new safety features will hide millions of incidents of child sexual abuse.”

It is an unfortunate fact of politics and politicking that “child safety” is a term too often used to cover up the urge to introduce even more online censorship.

Haugen’s reference to “new safety features” that should be added to encryption (in itself, the best bet that anyone at this time has on safety on the internet), can be taken as a call to introduce backdoors that would weaken this crucial technology.


cross-posted from: https://lemmy.dbzer0.com/post/21787602

This is a good example of how copyright’s continuing obsession with ownership and control of digital material is warping the entire legal system in the EU. What was supposed to be simply a fair way of rewarding creators has resulted in a monstrous system of routine government surveillance carried out on hundreds of millions of innocent people just in case they copy a digital file.


Kate Robertson is a senior research associate and Ron Deibert is director at the University of Toronto’s Citizen Lab.

A federal cybersecurity bill, slated to advance through Parliament soon, contains secretive, encryption-breaking powers that the government has been loath to talk about. And they threaten the online security of everyone in Canada.

Bill C-26 empowers government officials to secretly order telecommunications companies to install backdoors inside encrypted elements in Canada’s networks. This could include requiring telcos to alter the 5G encryption standards that protect mobile communications to facilitate government surveillance.

The government’s decision to push the proposed law forward without amending it to remove this encryption-breaking capability has set off alarm bells that these new powers are a feature, not a bug.

There are already many insecurities in today’s networks, reaching down to the infrastructure layers of communication technology. The Signalling System No. 7, developed in 1975 to route phone calls, has become a major source of insecurity for cellphones. In 2017, the CBC demonstrated how hackers only needed a Canadian MP’s cell number to intercept his movements, text messages and phone calls. Little has changed since: A 2023 Citizen Lab report details pervasive vulnerabilities at the heart of the world’s mobile networks.

So it makes no sense that the Canadian government would itself seek the ability to create more holes, rather than patching them. Yet it is pushing for potential new powers that would infect next-generation cybersecurity tools with old diseases.

It’s not as if the government wasn’t warned. Citizen Lab researchers presented the 2023 report’s findings in parliamentary hearings on Bill C-26, and leaders and experts in civil society and in Canada’s telecommunications industry warned that the bill must be narrowed to prevent its broad powers to compel technical changes from being used to compromise the “confidentiality, integrity, or availability” of telecommunication services. And yet, while government MPs maintained that their intent is not to expand surveillance capabilities, they pushed the bill out of committee without this critical amendment last month. In doing so, the government has set itself up to be the sole arbiter of when, and on what conditions, Canadians deserve security for their most confidential communications – personal, business, religious, or otherwise.

The new powers would only make people in Canada more vulnerable to malicious threats to the privacy and security of all network users, including Canada’s most senior officials. Encryption of 5G technology safeguards a web of connection points surrounding mobile communications, and protects users from man-in-the-middle attacks that intercept their text and voice communications or location data. The law would also impact cloud-connected smart devices like cars, home CCTV, or pacemakers, and satellite-based services like Starlink – all of which could be compromised by any new vulnerabilities.

Unfortunately, history is rife with government backdoors exposing individuals to deep levels of cyber-insecurity. Backdoors can be exploited by law enforcement, criminals and foreign rivals alike. For this reason, past heads of the CIA, the NSA and the U.S. Department of Homeland Security, as well as Britain’s Government Communications Headquarters (GCHQ) and MI5, all oppose measures that would weaken encryption. Interception equipment relied upon by governments has also often been shown to have significant security weaknesses.

The bill’s new spy powers also reveal incoherence in the government’s cybersecurity strategy. In 2022, Canada announced it would be blocking telecom equipment from Huawei and ZTE, citing the “cascading economic and security impacts” that a supply-chain breach would engender. The government cited concerns that the Chinese firms might be “compelled to comply with extrajudicial directions from foreign governments.” And yet, Bill C-26 would quietly provide Canada with the same authority that it publicly condemned. If the bill passes as-is, all telecom providers in Canada would be compellable through secret orders to weaken encryption or network equipment. It doesn’t just contradict Canada’s own pro-encryption policy and expert guidance – authoritarian governments abroad would also be able to point to Canada’s law to justify their own repressive security legislation.

Now, more than ever, there is no such thing as a safe backdoor. The GCHQ reports that the threat from commercial hacking firms will be “transformational on the cyber landscape,” and that cyber mercenaries wield capabilities rivalling that of state cyber-agencies. If the Canadian government compels telcos to undermine security features to accommodate surveillance, it will pave the way for cyberespionage firms and other adversaries to find more ways into people’s communications. A shortcut that provides a narrow advantage for the few at the expense of us all is no way to secure our complex digital ecosystem.

Against this threat landscape, a pivot is crucial. Canada needs cybersecurity laws that explicitly recognize that uncompromised encryption is the backbone of cybersecurity, and it must be mandated and protected by all means possible.


One wouldn’t have pegged Mastercard as the corporation “driving sustainable social impact” and caring about remote communities around the world struggling to meet basic needs.

Nevertheless, here we are – or at least that’s how the global payment services behemoth advertises its push to proliferate the use of a scheme called Community Pass.

The purpose of Community Pass is to provide a digital ID and wallet contained in a “smart card.” Launched four years ago, the program – which Mastercard says is based on digital ID, interoperable, and able to work offline – targets “underserved communities” and currently has 3.5 million users, with plans to grow that number to 30 million by 2027.

According to a map on Mastercard’s site, this program is now either being piloted or has been rolled out in India, Ethiopia, Uganda, Kenya, Tanzania, Mozambique, and Mauritania, while the latest announcement is the partnership with the African Development Bank Group in an initiative dubbed Mobilizing Access to the Digital Economy (MADE).

The plan is to, over ten years, make sure 100 million people and businesses in Africa are included in digital ID programs and thus allowed access to government and “humanitarian” services.

As for Community Pass itself, it aims to incorporate 15 million users on the continent over the next five years. This is Mastercard’s part of the deal, whereas the African Development Bank Group said it would invest $300 million to make sure MADE happens.

Given how controversial digital ID schemes are, and how much pushback they encounter in developed countries, it’s hard to shake off the impression that such initiatives are pushed so aggressively in economically disadvantaged areas and communities precisely because little opposition is expected.

But MADE is presented as almost a “humanitarian” service itself – there, apparently, solely to make life better, in particular for farmers and women, and improve things like connectivity, financial services, employment rates, etc.

The news about Mastercard’s latest partnership and initiative came from a US-Africa business forum organized by the US Chamber of Commerce.


Ireland’s media regulator (Coimisiún na Meán) has updated the Online Safety Code (part of the Online Safety Framework, a mechanism of the Online Safety and Media Regulation Act), and submitted it to the European Commission for assessment.

Considered by opponents as a censorship law that also imposes age verification or estimation (phrased as “age assurance”), the Code aims to establish binding rules for video platforms with EU headquarters located in Ireland.

It is expected that the European Commission will announce its position within 3 to 4 months, after which the rules will be finalized and put into effect, the regulator said.

Once greenlit by Brussels, the final version of the Code will impose obligations on platforms to ban uploading or sharing videos of what is considered to be cyberbullying, promoting self-harm or suicide, and promoting eating or feeding disorders.

But the list is much longer and includes content deemed to be inciting hatred or violence, terrorism, child sex abuse material, racism, and xenophobia.

Even though the new rules will inevitably give wide remit to censor video content as belonging to any of these many categories, and even though children are unavoidably mentioned as the primary concern, the Irish press reports that not everyone is satisfied with just how far the new Code goes.

One is a group called the Hope and Courage Collective (H&CC), whose purpose is apparently to “fight against far-right hate.” H&CC is worried that the Code will not be able to “keep elections safe” nor protect communities “targeted by hate.”

But what it will do, according to the media regulator’s statement, is to use “age assurance” as a way to prevent children from viewing inappropriate content, and do so via age verification measures.

The age verification controversy, however, doesn’t stem from the (even if only declared) intent behind it, but from the question of how it is supposed to be implemented, and how that implementation will stop short of undermining the privacy, and therefore security, of all users of a platform.

Still, the Irish regulator is satisfied that its new code, along with the EU’s Digital Services Act and Terrorist Content Online Regulation, will give it “a strong suite of tools to improve people’s lives online.”


PayPal has announced that it is creating an ad platform “powered” by the data the payment service giant has from millions of both customers and merchants – specifically, from their transaction information.

The data harvesting here will be on by default, but PayPal users (Venmo is included in the scheme) will be able to opt out of what some critics refer to as yet another example of “financial surveillance.” The company’s massive business in the first quarter of this year alone amounted to 6.5 billion transactions processed for 427 million customers.

Sellers are promised that they will, thanks to the new platform, achieve better sales of products and services, while customers are told to expect the ads targeting them to show more “relevant” products.

A press release revealed that to bolster this side of its business, PayPal has appointed two executives – Mark Grether, formerly Uber Advertising VP and general manager, and John Anderson, who was previously head of product and payments at the fintech firm Plaid.

In this way, PayPal is joining others who are turning to using customer data to monetize targeted advertising. In the company’s industry, Visa and JPMorgan Chase have been making similar moves, while big retailers “share” this type of data with Big Tech.

The PayPal scheme is based on shopping habits and purchase information that allows advertisers to pinpoint their campaigns, and Grether explained that the company “knows” who is making purchases on the internet and where and that this data can be “leveraged.”

He also told the Wall Street Journal that customers who use PayPal cards in physical stores will become sources of the same type of data.

Other than this, however, few details are known at this time as to the exact type of data that will be “fed” into the new ad platform.

A spokesperson has offered vague responses to this query, stating that there are no “definitive answers” to that at this “early stage” of the platform’s creation.

But Taylor Watson was sure to offer boilerplate assurances of transparency and privacy protections:

“Alongside the advertising business, PayPal will build transparent, easy-to-use privacy controls,” said this spokesperson.

submitted 4 weeks ago* (last edited 4 weeks ago) by c0mmando@links.hackliberty.org to c/privacy@links.hackliberty.org

Hello, today I am bringing you an updated (JUNE 01, 2024) guide on how to install SimpleX SMP and XFTP servers using docker compose. This guide assumes you already have docker and docker compose installed – and it also moves XFTP off the default port of 443 due to reverse proxy conflicts.
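For orientation before the full guide, the core of such a setup is a compose file along these lines. This is a minimal sketch, not the guide itself: the image names, environment variables, and volume paths are assumptions based on SimpleX Chat’s published Docker images, and host port 8443 is an arbitrary placeholder illustrating the move of XFTP off 443 – verify everything against the linked guide and the official docs.

```yaml
# docker-compose.yml - minimal sketch (assumed images, env vars, and paths;
# verify against the SimpleX docs). XFTP is remapped off host port 443 so a
# reverse proxy can keep using that port.
services:
  smp-server:
    image: simplexchat/smp-server:latest    # assumed image name
    restart: unless-stopped
    ports:
      - "5223:5223"                         # default SMP port
    volumes:
      - ./smp/config:/etc/opt/simplex       # server config and keys
      - ./smp/logs:/var/opt/simplex         # store log
    environment:
      - ADDR=smp.example.com                # placeholder: your FQDN or IP

  xftp-server:
    image: simplexchat/xftp-server:latest   # assumed image name
    restart: unless-stopped
    ports:
      - "8443:443"                          # host 8443 -> container 443
    volumes:
      - ./xftp/config:/etc/opt/simplex-xftp
      - ./xftp/logs:/var/opt/simplex-xftp
      - ./xftp/files:/srv/xftp              # uploaded file storage
    environment:
      - ADDR=xftp.example.com               # placeholder
      - QUOTA=10gb                          # assumed storage-quota option
```

After `docker compose up -d`, the servers typically print their full addresses (including key fingerprints) in the container logs; those addresses are what you paste into the SimpleX app, with the non-default XFTP port included.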


The Biden administration is pushing for sweeping measures to combat the proliferation of nonconsensual sexual AI-generated images, including controversial proposals that could lead to extensive on-device surveillance and control of the types of images generated. In a White House press release, President Joe Biden’s administration outlined demands for the tech industry and financial institutions to curb the creation and distribution of abusive sexual images made with artificial intelligence (AI).

A key focus of these measures is the use of on-device technology to prevent the sharing of nonconsensual sexual images. The administration stated that “mobile operating system developers could enable technical protections to better protect content stored on digital devices and to prevent image sharing without consent.”

This proposal implies that mobile operating systems would need to scan and analyze images directly on users’ devices to determine if they are sexual or non-consensual. The implications of such surveillance raise significant privacy concerns, as it involves monitoring and analyzing private content stored on personal devices.

Additionally, the administration is calling on mobile app stores to “commit to instituting requirements for app developers to prevent the creation of non-consensual images.” This broad mandate would require a wide range of apps, including image editing and drawing apps, to scan and monitor user activities on devices, analyze what art they’re creating and block the creation of certain kinds of content. Once this technology of on-device monitoring becomes normalized, this level of scrutiny could extend beyond the initial intent, potentially leading to censorship of other types of content that the administration finds objectionable.

The administration’s call to action extends to various sectors, including AI developers, payment processors, financial institutions, cloud computing providers, search engines, and mobile app store gatekeepers like Apple and Google. By encouraging cooperation from these entities, the White House hopes to curb the creation, spread, and monetization of nonconsensual AI images.

The initiative builds on previous efforts, such as the voluntary commitments secured by the Biden administration from major technology companies like Amazon, Google, Meta, and Microsoft to implement safeguards on new AI systems. Despite these measures, the administration acknowledges the need for legislative action to enforce these safeguards comprehensively.

The administration’s proposals raise significant questions about privacy and the potential for mission creep. The call for on-device surveillance to detect and prevent the sharing of non-consensual sexual images means that personal photos and content would be subject to continuous monitoring and analysis. Through Photoshop and other tools, people have been able to generate images on their devices for decades, but the recent evolution of AI is now being used to call for surveillance of the content people create.

This could set a precedent for more extensive and intrusive forms of digital content scanning, leading to broader applications beyond the original intent.


The EU’s European Council has followed the European Parliament (EP) in approving the AI Act – which opponents say is a way for the bloc to legalize biometric mass surveillance.

More than that, the EU is touting the legislation as first of its kind in the world, and seems hopeful it will serve as a standard for AI regulation elsewhere around the globe.

The Council announced the law is “groundbreaking,” taking a “risk-based” approach, meaning that the EU authorities get to grade the level of risk AI poses to society and then impose rules of varying severity, with penalties including monetary fines for companies deemed to be infringing the act.

What this “granular” approach to “risk level” looks like is revealed in the choices made: the EU considers cognitive behavioral manipulation “unacceptable,” while AI use in education and facial recognition is “high risk.” “Limited risk” applies to chatbots.

And developers will be under obligation to register in order to have the “risk” assessed before their apps become available to users in the EU.

The AI Act’s ambition, according to the EU, is to promote both the development and uptake, as well as investment in systems that it considers “safe and trustworthy,” targeting both private and public sectors for this type of regulation.

A press release said that the law “provides exemptions such as for systems used exclusively for military and defense as well as for research purposes.”

After the act is formally published, it will come into effect across the 27 member countries within three weeks.

Back in March, when the European Parliament approved the act, one of its members, Patrick Breyer of the German Pirate Party, slammed the preceding trilogue negotiations as “intransparent.”

But what was clear, according to this lawyer and privacy advocate, is that participants from the EP, who initially said they wanted to see a ban on real-time biometric mass surveillance in public places, did a 180 and in the end agreed to “legitimize” it through the AI Act’s provisions.

Breyer said that identification relying on CCTV footage is prone to errors that can have serious consequences – but that “none of these dystopian technologies will be off limits for EU governments” (thanks to the new law).

“As important as it is to regulate AI technology, defending our democracy against being turned into a high-tech surveillance state is not negotiable for us Pirates,” Breyer wrote at the time.


EU governments might soon endorse the highly controversial Child Sexual Abuse Regulation (CSAR), known colloquially as “chat control,” based on a new proposal by Belgium’s Minister of the Interior. According to a leak obtained by Pirate Party MEP and shadow rapporteur Patrick Breyer, this could happen as early as June.

The proposal mandates that users of communication apps must agree to have all images and videos they send automatically scanned and potentially reported to the EU and police.

This agreement would be obtained through terms and conditions or pop-up messages. To facilitate this, secure end-to-end encrypted messenger services would need to implement monitoring backdoors, effectively causing a ban on private messaging. The Belgian proposal frames this as “upload moderation,” claiming it differs from “client-side scanning.” Users who refuse to consent would still be able to send text messages but would be barred from sharing images and videos.

The scanning technology, employing artificial intelligence, is intended to detect known child sexual abuse material (CSAM) and flag new images and videos deemed suspicious. The proposal excludes the previously suggested scanning of text messages for grooming signs and does not address audio communication scanning, which has never been implemented.

The proposal, first introduced on 8 May, has surprisingly gained support from several governments that were initially critical. It will be revisited on 24 May, and EU interior ministers are set to meet immediately following the European elections to potentially approve the legislation.

Patrick Breyer, a staunch opponent of chat control, expressed serious concerns. “The leaked Belgian proposal means that the essence of the EU Commission’s extreme and unprecedented initial chat control proposal would be implemented unchanged,” he warns. “Using messenger services purely for texting is not an option in the 21st century. And removing excesses that aren’t being used in practice anyway is a sham.”

Breyer emphasizes the threat to digital privacy, stating, “Millions of private chats and private photos of innocent citizens are to be searched using unreliable technology and then leaked without the affected chat users being even remotely connected to child sexual abuse – this would destroy our digital privacy of correspondence. Our nude photos and family photos would end up with strangers in whose hands they do not belong and with whom they are not safe.”

He also points out the risk to encryption, noting that “client-side scanning would undermine previously secure end-to-end encryption to turn our smartphones into spies – this would destroy secure encryption.”

Breyer is alarmed by the shifting stance of previously critical EU governments, which he fears could break the blocking minority and push the proposal forward. He criticizes the lack of a legal opinion from the Council on this fundamental rights issue. “If the EU governments really do go into the trilogue negotiations with this radical position of indiscriminate chat control scanning, experience shows that the Parliament risks gradually abandoning its initial position behind closed doors and agreeing to bad and dangerous compromises that put our online security at risk,” he asserts.


X, formerly Twitter, is now mandating the use of a government ID-based account verification system for users who earn revenue on the platform – whether from ad revenue sharing or paid subscriptions.

To implement this system, X has partnered with Au10tix, an Israeli company known for its identity verification solutions. Users who opt to receive payouts on the platform will have to undergo a verification process with the company.

This initiative aims to curb impersonation and fraud and to improve user support, yet it also raises profound questions about privacy and free speech, as X markets itself as a free speech platform, and free speech and anonymity often go hand in hand. This is especially true in countries where speech can get citizens jailed, or worse.

“We’re making changes to our Creator Subscriptions and Ads Revenue Share programs to further promote authenticity and fight fraud on the platform. Starting today, all new creators must verify their ID to receive payouts. All existing creators must do so by July 1, 2024,” the update to X’s verification page now reads.

This shift towards online digital ID verification is part of a broader trend across the political sphere, where the drive for identification often conflicts with the desire for privacy and anonymous speech. By linking online identities to government-issued IDs, platforms like X may stifle expression, as users become wary of speaking freely when their real identities are known.

This policy shift signals a move towards more accurate but also more intrusive forms of user identification. Although intended to enhance security, these practices risk undermining the very essence of free speech by making users feel constantly monitored and raise fears that, in the near future, all speech on major platforms will have to be linked to a government-issued ID.

Anonymity has long been a cornerstone of free speech, allowing individuals to express controversial, dissenting, or unpopular opinions without fear of retribution. Throughout history, anonymous speech has been a critical tool for activists, whistleblowers, and ordinary citizens alike. It enables people to criticize their governments, expose corruption, and share personal experiences without risking their safety or livelihoods.

Governments around the world have been pushing for an end to online anonymity over the last year, and X’s new policy change is a step towards this agenda.

Over the last year, a slew of child safety bills has emerged, ostensibly aimed at protecting the youngest internet users. However, beneath the surface of these well-intentioned initiatives lies a more insidious agenda: the push for widespread online ID verification.

X owner Elon Musk has commented in support of these bills, as recently as last week.

While this new X requirement applies only to users looking to claim a cut of the advertising revenue X makes from their posts, and is not yet enforced for all users, it is a large step towards normalizing online digital ID verification.


The US Department of Commerce is seeking to end the right of users of cloud services to remain anonymous.

The proposal first emerged in January, documents show, detailing new rules (National Emergency with Respect to Significant Malicious Cyber-Enabled Activities) for Infrastructure as a Service (IaaS) providers, including Know Your Customer (KYC) rules of the kind normally applied to banks and financial institutions.

But now, the US government is citing concerns over “malicious foreign actors” and their usage of these services as a reason to effectively end anonymity on the cloud, including when only signing up for a trial.

Another new proposal from the notice is to cut access to US cloud services to persons designated as “foreign adversaries.”

As is often the case, although the justification for such measures is a foreign threat, US citizens inevitably, given the nature of the infrastructure in question, get caught up as well. And, once again, to address a problem caused by a few users, everyone will be denied the right to anonymity.

That, it appears, would be any government’s dream these days, while the industry itself – especially the biggest players like Amazon – can implement the identification feature with ease, at the same time gaining a valuable new source of personal data.

The only losers here appear to be users of IaaS platforms, who will have to allow tech giants yet another way of accessing their sensitive personal information and risk losing it through leaks.

Meanwhile, the actual malicious actors will hardly give up those services – they can simply rely on leaked personal data, sold and bought illegally, including by the very actors the proposal says it is targeting.

Until now, providers of cloud services felt no need to implement a KYC regime, instead allowing people to become users, or try their products, simply by providing an email, and a valid credit card in case they signed up for a plan.

As for what the proposal considers to be an IaaS, the list is long and includes services providing processing, storage, networks, content delivery networks (CDNs), virtual private servers (VPSs), proxies, domain name resolution services, and more.


Elon Musk stopped just short of explicitly endorsing two New York state online child safety bills even though, for the proposals to work, platforms would have to implement age verification and digital ID for people to access online platforms.

The X owner’s reaction to a post about Meta and Google reportedly spending more than a million dollars lobbying against the bills read: “In sharp contrast, X supports child safety bills.”

It remains unclear whether Musk expressed his support for these particular bills – New York Senate Bill S7694 and Bill S3281 – or the legislative efforts in general to make the internet a safer place for minors. Another possibility is that he was not missing a chance to criticize the competition.

Either way, there are two problems with such efforts that keep cropping up in various jurisdictions: very often, the proposed laws are far broader, but use the issue of protecting children as the main talking point to silence any opposition.

And, as in this case, they call for some form of age verification to be introduced, which is only doable by identifying everyone who visits sites or uses platforms, undermining online anonymity, and curbing free speech.

A press source who criticized Google and Meta for their lobbying effort (while speaking on condition of anonymity) said the bills’ provisions are “reasonable” – at least, most of them.

On the reasonable side is Bill S7694’s intention to, by amending general business law, make sure minors do not encounter “addictive” feeds on the social media they use.

This would be achieved by showing chronological rather than algorithmically manipulated feeds to those established to be minors.

Another provision would limit the time these users can spend on the sites, and their access during nighttime hours, as a health measure.

Bill S3281 deals with child data privacy, seeking to ban the harvesting of this data (and subsequent targeted advertising), as well as requiring “data controllers to assess the impact of its products on children for review by the Bureau of Internet and Technology.”

But the elephant in the room is – how are platforms supposed to know a user’s actual age?

This is where age verification comes in: the bills speak about using “commercially reasonable methods” to make sure a user is not a minor, and age verification through digital ID is also demanded to achieve “verifiable parental consent.”


The Australian Digital ID Law (Digital ID Bill 2024), which already passed the Senate, was adopted by Australia’s House of Representatives in an 87-56 vote.

Australia is joining the EU and several other countries that seek to get rid of people’s physical IDs and replace them with digital schemes that pool the most sensitive personal information into massive, centralized databases.

This is considered by opponents as a security and privacy catastrophe in the making, with many purely political (ab)uses possible down the road.

In Australia, the goal is to get government services, health insurance, taxes, etc., all linked. And to do this, the government will spend just shy of $197 million to launch the scheme.

MPs from the parties who voted against the bill – the Liberal-National Opposition – said that their constituents were worried about privacy, their freedoms in general, and government intervention.

Once again, arguments such as “convenience” – clearly a lopsided trade-off considering the gravity of these concerns – are offered to assuage them, and the point is made that participation is not mandatory.

At least not yet, and not explicitly.

Liberal Senator Alex Antic touched on precisely this point – an example being that the bill allows people to open bank accounts without digital IDs “by going to the nearest branch.”

But then – physical bank branches are now closing at a quick rate, Antic remarked.

Even more taxpayer money is being spent in Australia in order to shore up the Online Safety Act, and the eSafety program.

The censorship effort, which, like so many, claims that its purpose is merely to “protect the children,” is in reality set up to hunt down whatever the government decides qualifies as “harmful content.” Now the federal budget is earmarking millions for several projects, including a $6.5 million pilot that is supposed to produce an online age verification method (referred to as “age assurance technology”).

Meanwhile, “emerging online threats” will get a share from the total of $43.2 million set aside in the budget’s communications package.

The eSafety Commissioner’s office will get $1.4 million over the next two years.


Just as the EU’s police organization, Europol, continues to argue in favor of introducing encryption backdoors, which would dangerously undermine security on the internet – it is proving unable to protect its own data.

And that is even with the capabilities afforded to Europol and everybody else by the encryption standards currently in place.

Namely, Europol suffered an embarrassing data breach this May, with the database reportedly surfacing on the dark web. It is said to contain for-official-use-only documents, internal documents, source code, and possibly also classified information.

Europol has confirmed the incident but is attempting to reassure the public that its significance is low, since allegedly, operational information has not been leaked, while its key systems are “unaffected.”

Meanwhile, reports based on the dark web offer of the sensitive data say it was taken from the European Cybercrime Center (EC3), the Europol Platform for Experts (EPE), the Law Enforcement Forum, and the SIRIUS platform for electronic evidence.

Merely weeks ago, Europol was pushing for an internet even less secure than it is today, repeating the arguments heard many times from various law enforcement bodies around the world, who claim that undermining encryption is necessary for them to do their job.

Europol’s European Police Chiefs convention came up with a joint declaration that urged both governments and the tech industry to prevent the implementation of end-to-end encryption on social platforms – Meta (Facebook) moving in this direction was the immediate reason for this reaction.

To justify the desire to continue to have unobstructed access to people’s private communications, including on messaging apps, the EU law enforcement agency said end-to-end encryption would hinder investigations and evidence gathering.

And since “war is peace, freedom is slavery…”: what companies attempting to incorporate encryption in their apps rightly consider a step that enhances their users’ security and privacy, Europol considers a threat to public safety, while the lack of encryption is framed as a “secured digital environment.”

“Our homes are becoming more dangerous than our streets as crime is moving online. To keep our society and people safe, we need this digital environment to be secured,” Europol’s Executive Director Catherine De Bolle was quoted as saying at the time.

“Tech companies have a social responsibility to develop a safer environment where law enforcement and justice can do their work. If police lose the ability to collect evidence, our society will not be able to protect people from becoming victims of crime,” De Bolle added.

submitted 1 month ago* (last edited 1 month ago) by c0mmando@links.hackliberty.org to c/privacy@links.hackliberty.org

There was a lot of talk about the EU’s Digital Services Act (DSA) while it was drafted and during the typical-of-the-bloc tortuous process of adoption, but now that it’s been here for a while, we’ve been getting a sense of how it is being put to use.

Utilizing the European digital ID wallet to carry out age verification is just one of the ideas being pushed at fever pitch here. And EU bureaucrats are trying to make sure that these controversial policies are presented as perfectly in line with how the DSA was originally pitched.

The regulation was slammed by opponents as in reality a sweeping online censorship law hiding behind focused, and noble, declarations that its goal was to protect children’s well-being, fight disinformation, etc.

The cold hard reality is that trying to (further) turn the screw – any which way they can – on platforms with the most reach and most influence ahead of an election is simply something that those in power, whether it’s the US or the EU, don’t seem to be able to resist.

Here’s the European Commission (whose current president is actively campaigning to get reappointed in the wake of next month’s European Parliament elections) opening an investigation into Meta on suspicion that its flagship platforms, Facebook and Instagram, create “addictive behavior among children and damage mental health.”

After all, exerting a bit more pressure on social media just before an election never hurt anybody. /s

Thierry Breton, an EU commissioner who made a name for himself as a proponent of all sorts of online speech restrictions during the current, soon to expire European Commission mandate, reared his head again here:

“We open formal proceedings against Meta. We are not convinced that it has done enough to comply with the DSA obligations to mitigate the risks of negative effects to the physical and mental health of young Europeans on its platforms Facebook and Instagram,” Breton said in a press release.

And as the EU investigates “potential addictive impacts of the platforms (…) such as on unrealistic body image” – something not potential, but very concrete will also be under scrutiny: how effective Meta’s age verification tools are.

The grounds for these suspicions lie in the DSA. Under this pro-censorship legislation, which was instituted last summer, even major tech firms can now be held liable for online malevolence, from “misinformation” to shopping swindles, all the way to child endangerment.

Even though pushing age verification pushes digital ID and affects everybody’s privacy on the internet – due to the nature of the technology necessary to achieve such a result, like providing copies of your government-issued identification documents – Breton made sure this appeared to be purely a “think of the children” moment:

“We are sparing no effort to protect our children,” Breton said.

The investigation aims to substantiate the so-called “rabbit hole” effects that these platforms could have, in which they reportedly expose the youth to potentially damaging content about unrealistic physical appearances, among other things. The probe also aims to determine how effective Meta’s age-validation processes and child privacy safeguards are.

The “rabbit hole” narrative, which suggests that social media platforms like Facebook and Instagram can lead users down paths of addictive and potentially harmful content, brings to light significant questions, especially regarding how Meta is using algorithms to control what people see.

While the European Commission’s investigation into Meta on the surface seeks to protect the mental health of minors, it also raises the problem of increased censorship on these platforms.

If the commission substantiates the claims of the “rabbit hole” effect, it may prompt stringent regulatory measures aimed at curbing the exposure of harmful content to young users, but that could also bring about several behind-the-scenes algorithmic changes that suppress controversial content.

In the past, popular content producers such as Joe Rogan have been maligned as being a gateway to such “rabbit hole” content, and arguments similar to the ones the EU is making have been used to call for online censorship.

Meta has firmly defended its position, with a spokesperson stating, “We want young people to have safe, age-appropriate experiences online and have spent a decade developing more than 50 tools and policies designed to protect them. This is a challenge the whole industry is facing, and we look forward to sharing details of our work with the European Commission.”
