
X, formerly Twitter, is now mandating the use of a government ID-based account verification system for users who earn revenue on the platform – whether through advertising revenue sharing or paid subscriptions.

To implement this system, X has partnered with Au10tix, an Israeli company known for its identity verification solutions. Users who opt to receive payouts on the platform will have to undergo a verification process with the company.

This initiative aims to curb impersonation and fraud and to improve user support, yet it also raises profound questions about privacy and free speech. X markets itself as a free speech platform, and free speech and anonymity often go hand-in-hand – especially in countries where speech can get citizens jailed or worse.

“We’re making changes to our Creator Subscriptions and Ads Revenue Share programs to further promote authenticity and fight fraud on the platform. Starting today, all new creators must verify their ID to receive payouts. All existing creators must do so by July 1, 2024,” the update to X’s verification page now reads.

This shift towards online digital ID verification is part of a broader trend across the political sphere, where the drive for identification often conflicts with the desire for privacy and anonymous speech. By linking online identities to government-issued IDs, platforms like X may stifle expression, as users become wary of speaking freely when their real identities are known.

This policy shift signals a move towards more accurate but also more intrusive forms of user identification. Although intended to enhance security, these practices risk undermining the very essence of free speech by making users feel constantly monitored, and they raise fears that, in the near future, all speech on major platforms will have to be linked to a government-issued ID.

Anonymity has long been a cornerstone of free speech, allowing individuals to express controversial, dissenting, or unpopular opinions without fear of retribution. Throughout history, anonymous speech has been a critical tool for activists, whistleblowers, and ordinary citizens alike. It enables people to criticize their governments, expose corruption, and share personal experiences without risking their safety or livelihoods.

Governments around the world have been pushing for an end to online anonymity over the last year, and X’s new policy change is a step towards this agenda.

Over the last year, a slew of child safety bills has emerged, ostensibly aimed at protecting the youngest internet users. However, beneath the surface of these well-intentioned initiatives lies a more insidious agenda: the push for widespread online ID verification.

X owner Elon Musk has commented in support of these bills, as recently as last week.

While this new X change applies only to users looking to claim a cut of the advertising revenue that X makes from their posts, and is not yet enforced for all users, it is a large step towards normalizing online digital ID verification.


The US Department of Commerce is seeking to end the right of users of cloud services to remain anonymous.

The proposal first emerged in January, documents show, detailing new rules (under the National Emergency with Respect to Significant Malicious Cyber-Enabled Activities) for Infrastructure as a Service (IaaS) providers, including Know Your Customer (KYC) requirements of the kind normally applied to banks and financial institutions.

But now, the US government is citing concerns over “malicious foreign actors” and their usage of these services as a reason to effectively end anonymity on the cloud, including when only signing up for a trial.

Another new proposal from the notice is to cut access to US cloud services to persons designated as “foreign adversaries.”

As is often the case, although the justification for such measures is a foreign threat, US citizens inevitably, given the nature of the infrastructure in question, get caught up as well. And, once again, to address a problem caused by a few users, everyone will be denied the right to anonymity.

Ending anonymity appears to be any government’s dream these days, while the industry itself, especially the biggest players like Amazon, can implement the identification feature with ease, gaining a valuable new source of personal data in the process.

The only losers here appear to be users of IaaS platforms, who will have to allow tech giants yet another way of accessing their sensitive personal information and risk losing it through leaks.

Meanwhile, the actual malicious actors are unlikely to give up those services: leaked personal data can be bought and sold illegally – including by the very actors the proposal says it is targeting – and used to pass verification.

Until now, providers of cloud services felt no need to implement a KYC regime, instead allowing people to become users, or try their products, simply by providing an email address, plus a valid credit card if they signed up for a paid plan.

As for what the proposal considers to be an IaaS, the list is long and includes services providing processing, storage, networks, content delivery networks (CDNs), virtual private servers (VPSs), proxies, domain name resolution services, and more.


Elon Musk stopped just short of explicitly endorsing two New York state online child safety bills even though, for the proposals to work, platforms would have to implement age verification and digital ID for people to access online platforms.

The X owner’s reaction to a post about Meta and Google reportedly spending more than a million as they lobby against the bills read, “In sharp contrast, X supports child safety bills.”

It remains unclear whether Musk expressed his support for these particular bills – New York Senate Bill S7694 and Bill S3281 – or the legislative efforts in general to make the internet a safer place for minors. Another possibility is that he was not missing a chance to criticize the competition.

Either way, there are two problems with such efforts that keep cropping up in various jurisdictions: very often, the proposed laws are far broader than advertised, but use the issue of protecting children as the main talking point to shut down any opposition.

And, as in this case, they call for some form of age verification to be introduced, which is only doable by identifying everyone who visits sites or uses platforms, undermining online anonymity, and curbing free speech.

A press source who criticized Google and Meta for their lobbying effort (while speaking on condition of anonymity) said most of the bills’ provisions are “reasonable.”

On the reasonable side is Bill S7694’s intention, by amending general business law, to ensure minors do not encounter “addictive” feeds on the social media they use.

This would be achieved by showing chronological rather than algorithmically manipulated feeds to those established to be minors.

Another provision would limit the time these users can spend on, and their access to, the sites during the night, as a health benefit.

Bill S3281 deals with child data privacy, seeking to ban the harvesting of this data (and subsequent targeted advertising), as well as requiring “data controllers to assess the impact of its products on children for review by the Bureau of Internet and Technology.”

But the elephant in the room is – how are platforms supposed to know a user’s actual age?

This is where age verification comes in: the bills speak about using “commercially reasonable methods” to make sure a user is not a minor, and age verification through digital ID is also demanded to achieve “verifiable parental consent.”


The Australian Digital ID Law (Digital ID Bill 2024), which already passed the Senate, was adopted by Australia’s House of Representatives in an 87-56 vote.

Australia is joining the EU and several other countries that seek to get rid of people’s physical IDs and replace them with digital schemes that pool the most sensitive personal information into massive, centralized databases.

This is considered by opponents as a security and privacy catastrophe in the making, with many purely political (ab)uses possible down the road.

In Australia, the goal is to get government services, health insurance, taxes, etc., all linked. And to do this, the government will spend just shy of US$197 million to launch the scheme.

MPs from the parties who voted against the bill – the Liberal-National Opposition – said that their constituents were worried about privacy, their freedoms in general, and government intervention.

Once again, arguments such as “convenience” – clearly a lopsided trade-off considering the gravity of these concerns – are offered to assuage them, and the point is made that participation is not mandatory.

At least not yet, and not explicitly.

Liberal Senator Alex Antic touched on precisely this point – an example being that the bill allows people to open bank accounts without digital IDs “by going to the nearest branch.”

But then – physical bank branches are now closing at a quick rate, Antic remarked.

Even more taxpayer money is being spent in Australia in order to shore up the Online Safety Act, and the eSafety program.

The censorship effort, which, like so many, claims its purpose is merely to “protect the children,” is in reality set up to hunt down whatever the government decides qualifies as “harmful content.” The federal budget is now earmarking millions for several projects, including a $6.5 million pilot that is supposed to produce an online age verification method (referred to as “age assurance technology”).

Meanwhile, “emerging online threats” will get a share from the total of $43.2 million set aside in the budget’s communications package.

The eSafety Commissioner’s office will get $1.4 million over the next two years.


Even as the EU’s police organization, Europol, continues to argue in favor of introducing encryption backdoors, which would dangerously undermine security on the internet, it is proving unable to protect its own data.

And that is even with the capabilities afforded to Europol and everybody else by the encryption standards currently in place.

Namely, Europol has suffered an embarrassing data breach this May, with the database reportedly surfacing on the dark web. It is said to contain official use-only documents, internal documents, source code, and possibly also classified information.

Europol has confirmed the incident but is attempting to reassure the public that its significance is low, since allegedly, operational information has not been leaked, while its key systems are “unaffected.”

Meanwhile, reports based on the dark web offer of the sensitive data say it was taken from the European Cybercrime Center (EC3), the Europol Platform for Experts (EPE), the Law Enforcement Forum, and the SIRIUS platform for electronic evidence.

Mere weeks ago, Europol was pushing for an internet even less secure than it is today, repeating the arguments heard many times from various law enforcement bodies around the world, who claim that undermining encryption is necessary for them to do their job.

Europol’s European Police Chiefs convention came up with a joint declaration that urged both governments and the tech industry to prevent the implementation of end-to-end encryption on social platforms – Meta (Facebook) moving in this direction was the immediate reason for this reaction.

To justify the desire to continue to have unobstructed access to people’s private communications, including on messaging apps, the EU law enforcement agency said end-to-end encryption would hinder investigations and evidence gathering.

And since “war is peace, freedom is slavery…” – what companies incorporating encryption into their apps rightly consider a step that enhances their users’ security and privacy, Europol considers a threat to public safety, while the lack of encryption is framed as a “secured digital environment.”

“Our homes are becoming more dangerous than our streets as crime is moving online. To keep our society and people safe, we need this digital environment to be secured,” Europol’s Executive Director Catherine De Bolle was quoted as saying at the time.

“Tech companies have a social responsibility to develop a safer environment where law enforcement and justice can do their work. If police lose the ability to collect evidence, our society will not be able to protect people from becoming victims of crime,” De Bolle added.


There was a lot of talk about the EU’s Digital Services Act (DSA) while it was drafted and during the typical-of-the-bloc tortuous process of adoption, but now that it’s been here for a while, we’ve been getting a sense of how it is being put to use.

Utilizing the European digital ID wallet to carry out age verification is just one of the ideas now reaching fever pitch here. And EU bureaucrats are trying to make sure that these controversial policies are presented as perfectly in line with how the DSA was originally pitched.

The regulation was slammed by opponents as in reality a sweeping online censorship law hiding behind focused, and noble, declarations that its goal was to protect children’s well-being, fight disinformation, etc.

The cold hard reality is that trying to (further) turn the screw – any which way they can – on platforms with the most reach and most influence ahead of an election is simply something that those in power, whether it’s the US or the EU, don’t seem to be able to resist.

Here’s the European Commission (whose current president is actively campaigning to get reappointed in the wake of next month’s European Parliament elections) opening an investigation into Meta on suspicion its flagship platforms, Facebook and Instagram, create “addictive behavior among children and damage mental health.”

After all, exerting a bit more pressure on social media just before an election never hurt anybody. /s

Thierry Breton, an EU commissioner who made a name for himself as a proponent of all sorts of online speech restrictions during the current, soon to expire European Commission mandate, reared his head again here:

“We open formal proceedings against Meta. We are not convinced that it has done enough to comply with the DSA obligations to mitigate the risks of negative effects to the physical and mental health of young Europeans on its platforms Facebook and Instagram,” Breton said in a press release.

And as the EU investigates “potential addictive impacts of the platforms (…) such as on unrealistic body image” – something not potential, but very concrete will also be under scrutiny: how effective Meta’s age verification tools are.

The grounds for these suspicions lie in the DSA. With this pro-censorship legislation, which was instituted last summer, even major tech firms can now be held liable for online malevolence from “misinformation” to shopping swindles, all the way to child endangerment.

Even though pushing age verification pushes digital ID and affects everybody’s privacy on the internet, due to the nature of the technology necessary to achieve such a result – like providing copies of your government-issued identification documents – Breton made sure this appeared to be purely a “think of the children” moment:

“We are sparing no effort to protect our children,” Breton said.

The investigation aims to substantiate the so-called “rabbit hole” effects these platforms could have, in which they reportedly expose the youth to potentially damaging content about unrealistic physical appearances, amongst other things. The probe also aims to assess the efficacy of Meta’s age-validation processes and child privacy safeguards.

The “rabbit hole” narrative, which suggests that social media platforms like Facebook and Instagram can lead users down paths of addictive and potentially harmful content, brings to light significant questions, especially regarding how Meta is using algorithms to control what people see.

While the European Commission’s investigation into Meta on the surface seeks to protect the mental health of minors, it also raises the problem of increased censorship on these platforms.

If the commission substantiates the claims of the “rabbit hole” effect, it may prompt stringent regulatory measures aimed at curbing the exposure of harmful content to young users, but that could also bring about several behind-the-scenes algorithmic changes that suppress controversial content.

In the past, popular content producers such as Joe Rogan have been maligned as being a gateway to such “rabbit hole” content, and arguments similar to those the EU is making have been used to call for online censorship.

Meta has firmly defended its position, with a spokesperson stating, “We want young people to have safe, age-appropriate experiences online and have spent a decade developing more than 50 tools and policies designed to protect them. This is a challenge the whole industry is facing, and we look forward to sharing details of our work with the European Commission.”


Belgium and Hungary are leading the way in launching digital ID wallets ahead of the EU’s eIDAS (“electronic identification and trust services”) 2.0 regulation and the EUDI Wallet coming into force later this month.

In Belgium, the MyGov.be app, covering all of the country’s federal public services, was launched on Tuesday, with the government promoting the digital identity as “simplifying” the use of its services, and “making life easier.”

In other words, the authorities there are playing the convenience card – while downplaying the risks that come with this type of centralization of people’s identities.

The wallet, via “eBox” mailbox, gives access to government-issued documents, as well as 683 services, identity data, Covid vaccination records, and more.

However, the success of the scheme is by no means guaranteed: it is not mandatory, so people are free to decide not to use it.

Judging by an opinion poll Deloitte carried out last year, “71 percent of Belgians do not want a digital ID on their phone,” reports say, adding that the same survey showed that 79 percent “do not want a mobile driver’s license, while half refuse to fully digitize their IDs.”

“Ease of use” is also how digital ID is pushed in Hungary, where the appropriate app will be made available for download as soon as this week, while the service will be fully operational from September.

Enthusiastic reports about this development describe the digital ID program as “innovative,” “handy” and “saving costs.”

At the same time, putting all of a person’s data in one place and storing it in the cloud is advertised as something positive, instead of what opponents consider as scary – from the security standpoint alone.

The operation is centralized via Hungary’s Digital Citizenship Program. Wallet users will have all their administrative and other documents in that one place, plus a digital mailbox and their vaccine passport, and can rely on a pan-EU electronic signature, while electronic payments will be “fully integrated.”

Hungary’s digital identity scheme uses biometric data obtained through facial recognition to match a user’s face “against a government database.”

Meanwhile, some security researchers in other EU countries are warning that as many as 75 percent of what are described as high-risk organizations (healthcare, banking, air travel…) “do not use the strongest verification method.”


Visa, one of the world’s two largest payment card services, is launching new, proprietary technology that will allow it to give retailers even more data collected from its customers.

The move is seen as Visa working hard to keep pace with the other giant – Mastercard – but also with fintech firms like Plaid.

The latter’s business, in terms of Visa considering it a rival, is revealing: it’s to power fintech and associated products with a data transfer network – specifically, a platform that “enables applications to connect with [a] user’s bank account.”

Visa’s “fear of missing out” on another lucrative personal data and customer behavior-based money grab is taking the form of “tokens” which allow banks and merchants to communicate so that banks can share customer data that offers insight into their preferences based on past transactions.

Reports say that this requires customers’ consent – but then also quote Visa Chief Executive Officer Ryan McInerney as saying, “It’s almost entirely blind to almost all consumers. They just know their payments work better.”

McInerney came up with a brand new way to phrase “opt-out” – he said the tokens come with consent “as the foundational premise.” The Visa exec brazenly referred to this as “putting [the] customer in control”:

“Consumers will have the option, through their bank app, to revoke access to their information.”

Visa is clearly banking – pun intended – on its customers accepting a “mirrors for gold” type of deal: giving up their valuable and sensitive personal information, opted in by default and poorly aware, if at all, of its worth, in exchange for a “shiny object” – in this case, a little more convenience.

And while this angle may at this point be lost on most people, Visa and its ilk seem to be counting on just that.

“Better shopping experience” is how Visa phrases it. Some type of AI (machine learning, one ventures to guess) is involved in the closed-source software now being rolled out, which has access to huge personal information datasets of the kind Visa holds.

Visa users might like to know that the “new sharing of shopping data through tokens” will debut as a pilot at an as yet unspecified date “later this year.”


UK Prime Minister Rishi Sunak has thrown what political weight he has behind the use of the hotly debated live facial recognition technology.

The direct endorsement of the controversial tech came in a speech that is part of his election campaign, in which he promoted it with talking points such as fighting crime more efficiently.

But Sunak didn’t stop there, also asking the Policy Exchange UK conference audience to “imagine a welfare system where new technologies allow us to crack down on the fraudsters exploiting the hard-working taxpayers who fund it” – which is seen as an implicit endorsement of mass government surveillance of people’s bank accounts.

But despite Sunak’s attempt to talk up these schemes as beneficial to society, both bank account surveillance and live facial recognition are dismissed by privacy advocates like Big Brother Watch as dystopian threats to people’s civil rights and privacy.

The trade-off between potentially making it easier to catch criminals and fraudsters, and the entire population having its financial and physical privacy undermined, is unacceptable to this non-profit, which says the policies of bank spying and expanded facial recognition surveillance carry dangerous implications.

And, Big Brother Watch offered its “translation” of Sunak’s statements: “Imagine this: The Government spying on ALL of our bank accounts on the premise of detecting welfare fraud & error…

Police taking your face prints as you go about your day on your high street

Imagine, a nation of suspects.”

In a bid to avoid this, a petition has been launched to prevent the government from gaining the new powers around bank account surveillance, calling them, on the one hand, redundant – as the government is already equipped to deal with fraud and misuse of public money.

But on the other hand, everyone’s privacy and other rights would be jeopardized, subject to the government’s possibly arbitrary – since secret – criteria that banks would have to follow.

In the meantime, Sunak is putting taxpayer money where his AI-surveillance-supporting mouth is, with reports saying that the equivalent of $69.5 million was recently set aside to allow the police to “accelerate” facial recognition deployment. And that is only one of the initiatives in this space.

Sunak chose to “sit on two AI chairs” – which comes down to, “AI is good if used by the government, and also good as an excuse to ramp up censorship under the guise of fighting online harms.”

Preventing abuse of government surveillance, along with appropriate regulation and transparency, does not seem to be top of mind when the current British PM talks about AI, though.


Digital scams have become increasingly sophisticated, and Google’s latest innovation offers a promising defense mechanism. But, as with all things Google promotes as progress, the technology’s impact on people’s lives and its wider implications could become a major civil liberties issue – anti-privacy EU officials in particular are likely salivating at the idea of the technology.

Announced at the I/O developer conference, the company is testing a new call monitoring feature designed to protect Android users from phone scams. This feature leverages Gemini Nano, a streamlined version of Google’s Gemini large language model, which can run locally on devices to detect fraudulent language and patterns during calls, alerting users in real time. While this development is a significant step forward in combating scams, it also raises crucial questions about privacy and the potential for broader applications that could infringe on personal freedoms.

Gemini Nano: A Powerful Tool Against Scams

Google’s new feature utilizes advanced AI to scan for signs of scamming behavior, such as requests for personal information, urgent money transfers, and payments via gift cards. By operating entirely on-device, Gemini Nano is supposed to ensure that conversations remain private and, in theory, do not need to be sent to external servers for processing.
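The on-device property being claimed can be illustrated with a deliberately tiny, rule-based sketch. This is a hypothetical illustration only – Gemini Nano is a local large language model and works very differently – but it shows the key point: the flagging logic runs entirely on the device, and no transcript ever has to leave it.

```python
import re

# Hypothetical toy stand-in for on-device scam-signal detection.
# The patterns mirror the signals named above: gift card payments,
# urgent transfers, and requests for personal information.
SCAM_PATTERNS = [
    (re.compile(r"\bgift\s*cards?\b", re.I), "payment via gift cards"),
    (re.compile(r"\b(wire|transfer)\b.*\b(urgent(ly)?|immediately|now)\b", re.I),
     "urgent money transfer"),
    (re.compile(r"\b(social security|password|pin)\b", re.I),
     "request for personal information"),
]

def flag_scam_signals(transcript: str) -> list[str]:
    """Return labels for any scam signals found; runs purely locally."""
    return [label for pattern, label in SCAM_PATTERNS
            if pattern.search(transcript)]

alerts = flag_scam_signals(
    "You must wire the money immediately, or pay us in gift cards."
)
print(alerts)  # ['payment via gift cards', 'urgent money transfer']
```

A real detector would use a local model rather than regexes, but the privacy argument rests on the same design choice: the call audio and transcript stay on the handset.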

The EU’s Perspective: From Child Safety to Potential Overreach

The European Union has been at the forefront of legislative efforts to regulate the online world, particularly in protecting children from sexual abuse material – or at least using that as an excuse to erode privacy. The EU’s “Chat Control” proposal, for instance, has sparked significant debate due to its implications for privacy and encryption. Initially, the proposal mandated that tech companies implement client-side scanning (CSS) (that’s on-device scanning) to detect bad content before it is encrypted and sent, which many critics argue could lead to mass surveillance, weaken end-to-end encryption, and ultimately be used to monitor and detect whistleblowers, dissidents, or anyone else a government would want to surveil.

While Google’s new scam detection feature is focused on phone calls, it sets a precedent for the use of advanced AI in real-time content monitoring. Some tech companies have pushed back against such invasive anti-privacy proposals. But now that Google has decided to implement this to detect scams, the floodgates could open to other forms of on-device scanning.

Now that the technology exists and is being used anyway, this development could embolden the EU to extend similar requirements to other forms of communication, such as emails, chats, and social media interactions, under the guise of preventing various types of harmful content, including “misinformation,” “hate speech,” and other activities. Today it’s scams, tomorrow it’s dissent.

It hints at a future when it’s not just the cloud that people need to be concerned about when it comes to surveillance and the monitoring of speech; they’ll also have to contend with whether they ultimately have privacy on their devices themselves.

Privacy Concerns and the Slippery Slope

The potential for such technology to be expanded beyond its initial purpose raises significant privacy concerns. If the EU were to mandate the use of client-side scanning for a broader range of content, it could lead to a scenario where all forms of digital communication are subject to constant monitoring. This could effectively dismantle the privacy protections afforded by end-to-end encryption, as all messages would need to be scanned for illegal content before being encrypted.

Critics argue that this approach could create a surveillance state, where personal communications are continuously scrutinized by AI algorithms, as mandated by governments. The risk of false positives, where legitimate content is incorrectly flagged as harmful, is another serious concern. Such errors could ultimately lead to unwarranted scrutiny of innocent individuals and chilling effects on free speech where even your own device isn’t safe from dissent monitoring.


Australia’s House of Representatives has passed the national digital ID bill, which will embed the new online identification program into law.

The digital ID is supposed to replace the need for physical IDs, and is linked with government services such as MyGov, Centrelink, Medicare, and the Australian Tax Office (ATO).

The government has allocated $288.1 million (US$197 million) from the federal budget to roll out the program.

On May 16, the Digital ID Bill 2024 received support from the Labor government, the left-wing Greens, and “Teals,” but was opposed by the Liberal-National Opposition. It already passed the Senate in March.

Live minutes from the nation’s lower house showed 87 members of Parliament voting in favour of the Bill, with 56 members voting “no.”

A day earlier, the legislation was sent to the smaller Federation Chamber for debate, before landing back in the House for a final vote.

During the Chamber debate, Nationals MP Pat Conaghan said voters in his electorate were concerned about the digital ID. “They have concerns about privacy, government intervention and their freedoms. Just because I’m out the back of the bush in halls and in the corners of pubs doesn’t mean that it’s the tinfoil-hat brigade coming to see me to raise their concerns,” he said.

“These are mums and dads, grandmothers and grandfathers, business owners and farmers. Every day people out there are coming to talk to me about their concerns.”

However, Labor MP Graham Perrett argued the legislation was overdue, secure, and “fully voluntary.”

“If you have a digital ID, you have a secure and convenient way to verify your identity when using online services. A digital ID takes the place of identifying yourself via traditional methods such as your birth certificate, your passport or your driver’s licence,” he said.

The Labor government has maintained that the digital ID would be a convenient and voluntary way to verify identities online without having to repeatedly share sensitive documents.

Private businesses will be able to join Australia’s digital ID system within two years of the law being enacted, according to the government.

Individuals Could be Forced onto the Program: Senator

Previously, Liberal Senator Alex Antic raised concerns the Digital ID Bill would not remain voluntary, pointing to exceptions outlined in the Bill. The example given (pdf) states that, “Jacob wants to open a bank account with ABC Bank but he does not wish to use his digital ID to do so.

“Because Jacob can verify his identity by going to his nearest branch instead, ABC Bank does not contravene subsection (1).”

Senator Antic said this would compel many people to set up a digital ID because physical bank branches are now closing at a quick rate.

Another area of concern for Mr. Antic was that a “digital ID regulator” could grant an exemption where they were “satisfied that it is appropriate to do so.”

“That’s hardly comforting. It’s simply up to the regulator, to determine whether making a digital ID mandatory is appropriate or not,” the senator said.

Meanwhile, One Nation Senator Malcolm Roberts has previously warned the digital ID could work in tandem with the Misinformation Bill, and digital currency, to create a “social credit” system akin to that used in China.

“What you can see here is a framework for a social credit system. Complete control of every citizen of Australia. Whether you like it or not,” Mr. Roberts said.

However, Labor Senator Katy Gallagher has maintained that the digital ID would be a safe way for Australians to operate online.

“Australians will be sharing less personal information, which is held by fewer organisations, that are subject to stronger regulation—reducing the chance of identity theft online,” she said.

“It’s the Albanese government that is delivering a scheme which is safe, voluntary and will protect Australians in an increasingly online world.”

In addition, the Digital ID (Transitional and Consequential Provisions) Bill 2023, which deals with matters arising from the principal Digital ID Bill 2024, also passed the House of Representatives on May 16.

62

The US Senate has passed the Federal Aviation Administration (FAA) reauthorization act, which enjoyed bipartisan support, with an overwhelming majority (88-4).

The legislation includes a push to introduce digital ID and digital or mobile driver’s licenses, and will be considered by the House this week – the final hurdle before, if approved, it gets signed by President Biden.

The section dealing with acceptance of digital IDs and driver’s licenses is buried deep in the document; we found it on page 1,015.

We obtained a copy of the bill for you here.

It reads that the FAA administrator “shall take such actions as may be necessary to accept, in any instance where an individual is required to submit government-issued identification to the Administrator, a digital or mobile driver’s license or identification card issued to such individual by a state.”

While adopting the bill, the Senate left out an amendment drafted by Senator Jeff Merkley, meant to temporarily halt wider deployment of facial recognition tools at US airports.

The Democrat’s idea was to impose a moratorium on biometric surveillance proliferation by the Transportation Security Administration (TSA) at least over the next three years.

The reasoning behind the amendment was that the current usage of facial recognition technology lacks transparency and results in travelers being poorly, if at all, aware of their rights in this regard.

The Senate chose to ignore the amendment, which wasn’t even put up for a vote, despite it making what appears to be a reasonable demand to ensure people can make informed decisions about participation in the schemes – namely, provide “simple and clear signage, spoken announcements, or other accessible notifications” about the ability to opt-out.

Yet this is something US Travel Association CEO Geoff Freeman dismissed as “dangerous and costly.” And Freeman doesn’t measure cost here in terms of the safety and privacy of personal data, but in “traveler hours a year” that would allegedly be added to wait times if travelers’ rights were better protected.

But Merkley noted in a post on X, responding to Freeman’s statement, that this is not true even according to the TSA, whose site states that people opting out of facial recognition does not add to wait times.

Once signed into law, the reauthorized act would provide the FAA with some $105 billion and the National Transportation Safety Board with $738 million to carry out various “safety and technology upgrades.”

63

The Biden White House has come up with an updated version of the US National Cybersecurity Strategy Implementation Plan (NCSIP) that, unlike the first version, addresses digital ID directly and commits to “supporting development of a digital ID ecosystem.”

We obtained a copy of the report for you here.

That initiative is included in the document as one of the strategic objectives, the stated goal being to advance research and guidance “that supports innovation in the digital identity ecosystem through public and private collaboration.”

The National Institute of Standards and Technology (NIST) has been entrusted with doing that work. Listed as contributing entities are the Department of Homeland Security (DHS) and the General Services Administration (GSA).

Some observers, such as Jeremy Grant – who led the federal digital identity initiative (NSTIC) under the Obama administration – would like things around the introduction of digital ID to move faster, and see this as status quo: NIST has already been doing the same “for years,” he commented, while the only news here is that this work has been officially mentioned in the new NCSIP.

And what NIST has been doing so far is digital identity research funded under the Creating Helpful Incentives to Produce Semiconductors (CHIPS) and Science Act.

And here’s what NIST’s (continued) activities entail: publishing digital identity guidelines, evaluating facial recognition and analysis technology, and publishing considerations for Attribute Validation Services.

Meanwhile, the government promises to “encourage and enable” investments in identity solutions, with another promise being that these solutions will be secure, accessible, and interoperable while fostering all the good things like consumer privacy, economic growth, and “financial and social inclusion.”

The way it is promoted, the US government’s plans around digital IDs are not much different from how they are defended elsewhere in the world; those opposed to the schemes, however, cite a range of issues linked to the digital centralization of people’s identity such as large-scale data breaches, identity theft, as well as major privacy concerns.

The first version of NCSIP that Biden signed last spring had Q2 of 2024 as the deadline to complete 36 initiatives (33 have been completed to date).

The updated version contains 31 initiatives, including that on digital ID, with its completion date listed as Q2 of fiscal year 2025.

Overall, the current White House is pushing NCSIP as “a bold, affirmative vision for cyberspace to secure the full benefits of a safe and secure digital ecosystem for all Americans.”

Digital ID systems, which provide a way to verify an individual’s identity electronically, have sparked significant controversy from a civil liberties perspective for several reasons. Chief among these concerns is the issue of privacy. Digital IDs often involve the collection, storage, and sometimes sharing of sensitive personal data, including biometric data such as fingerprints or facial recognition scans. This raises fears about the potential for surveillance and data misuse by both government authorities and private entities. The centralization of personal data could make it a target for breaches and unauthorized access, leading to identity theft or misuse of information.

64

The Foreign Intelligence Surveillance Act (FISA, amended in 2008), as a whole and its Section 702 in particular have been a “gift that keeps on giving” where all manner of controversies are concerned.

In late April, it was time to once again reauthorize this legislation, whose privacy safeguards have been routinely bypassed by law enforcement for years – and reauthorized it was, with the persistent major points of contention being warrantless access to data belonging to Americans (and respect for their constitutional rights).

The issue this time surfaced in a provision that changed the definition of electronic communications service providers (ECSPs) – in terms of which companies fall under this category, that is, which providers are obligated to give the government access to communications.

As things stand, more US businesses than ever would have to provide access to phones, Wi-Fi routers, and other equipment.

This launched a debate in the Senate in April around the scope of surveillance authorities – opponents argued the provision significantly expanded them – yet in the end, they failed to stop the reauthorization (the “Reforming Intelligence and Securing America Act”).

The question was left open with a “pledge” that it would be revisited down the line, and now reports say that Congress is working to “fix” the problem through the Senate Intelligence Committee’s annual intelligence authorization bill.

The promise was originally made by Senator Mark Warner, who heads the committee and who last month assumed a major role in making sure the reauthorization bill passed without incorporating changes to the controversial provision.

Speaking at the RSA security conference, the Democrat repeated that promise, saying that work is being done toward fulfilling it and adding that he is “absolutely committed to getting that fixed.”

However, Warner was rather vague about what the solution might entail, saying only that the committee is “very much in progress” and reassuring his audience that it won’t be difficult to “fix” the problem.

“I don’t think it is a high hurdle” – he said, referring to addressing the secret surveillance expansion in the upcoming intelligence bill.

When the extension bill passed, opponents warned that the government had gained even more power in this space, while spy agencies would continue to evade accountability.

65

The EU’s new digital ID rules, the Digital Identity Regulation (eIDAS 2.0), are about to come into force on May 20, mandating compliance from Big Tech and member countries in supporting the EU Digital Identity (EUDI) Wallet.

However, work is not complete on the EUDI Wallet, as several pilots are planned for 2025 to consolidate the implementation of the rules.

According to the framework the European Council recently passed, which has now been officially published, the deadline for the digital ID wallet to be recognized and made available is 2026. For now, it will be used in several scenarios, including accessing government services and age verification, reports note.

As things stand now, that deadline means that while the wallet scheme must become fully functional by that time, it will not be obligatory for citizens of the EU’s 27 members, and protection against discrimination is promised to those choosing not to opt in.

Getting a digital wallet issued, using it, or having it revoked will be free of charge, while the code powering the system will be “open source” – but with the caveat that countries will be able to “withhold certain information with reasonable justification.”

The regulation also aims to preserve website authentication certificate standards now in place and established in the industry, while “clarifying their scope.”

Some institutions across Europe appear more enthusiastic than others, and so the government of Spain’s Catalonia region has hailed the revised regulation as “a clear paradigm shift” that promotes standardization in the bloc, and one that allegedly gives users “greater autonomy over their personal data.”

When it comes to age verification, the new rules are overall seen as a positive development by the proponents, but they are not fully satisfied that EUDI Wallet will provide the ultimate solution.

Thus the euCONSENT NGO, set up to promote pan-European age verification utilizing eIDAS infrastructure, noted that implementing the wallet for this purpose will not be “convenient” due to the complexity of disclosing “an age attribute to each website separately.”

euCONSENT specifically remarked that even with EUDI Wallet, “alternatives” will have to be put in place for those children too young to have a wallet linked to their identity.

Digital IDs can also be used to control access to essential services, potentially manipulating social or political compliance. The extensive data collection involved can lead to profiling and discrimination. Furthermore, these IDs are susceptible to hacking and identity theft, placing individuals at risk of financial and reputation damage. Often, citizens are coerced into participating without genuine consent, and the lack of transparency and oversight in these systems increases the risk of misuse.

66

cross-posted from: https://sopuli.xyz/post/12515826

I’m looking for an email service that issues email addresses with an onion variant. E.g. so users can send a message with headers like this:

From: replyIfYouCan@hi3ftg6fgasaquw6c3itzif4lc2upj5fanccoctd5p7xrgrsq7wjnoqd.onion  
To: someoneElse@clearnet_addy.com

I wonder if any servers in the onionmail.info pool of providers can do this. Many of them have VMAT, which converts onion email addresses to clearnet addresses (not what I want). The docs are vague. They say how to enable VMAT (which is enabled by default anyway), and neglect to mention how to disable VMAT. Is it even possible to disable VMAT? Or is there a server which does not implement VMAT, which would send msgs to clearnet users that have onion FROM addresses?

67

The University of North Carolina (UNC) is moving to ban anonymous social apps, supposedly out of declared concern for students’ well-being.

The idea, the brainchild of the UNC System, will affect all 16 campuses under its control. But this is not an isolated case as other universities are reported to be looking into making similar decisions.

The UNC System chose dramatic language to justify the move, referring to the apps in question as somehow having “reckless disregard” for students (in terms of allowing “bad behavior and bullying”), with the organization’s president Peter Hans vowing to block “the most destructive ones.”

Another qualification found in a statement issued by the UNC Board of Governors equates anonymous apps to “scrawling cruel rumors on the bathroom wall.”

It isn’t at all clear when this decision, which critics might describe as recklessly destructive toward free speech, is going to be implemented. UNC is keeping quiet on such details despite repeated attempts by the College Fix to learn more about the upcoming scheme.

But in the previously drafted document, Hans revealed that “a handful of smaller, hyper-local platforms” will be the first for the chop, and clarified these included YikYak, Sidechat, Fizz, and Whisper.

While the apps are recognized as providing platforms for sharing memes and jokes, he accused them of also ignoring a whole gamut of societal ills: racism, sexual harassment, and drug dealing.

However, the Foundation for Individual Rights and Expression (FIRE) non-profit sees anonymous apps as valuable tools for students to express themselves without fear, as self-censorship has been on the rise in US universities in recent years.

According to FIRE’s Program Officer Jessie Appleby, blocking these apps is tantamount to “getting rid of that outlet for constructive speech just because of a small amount of offensive speech, and that’s generally not how you want to approach speech.”

But despite the fiery rhetoric coming from the UNC System, the plan is revealed to be effectively symbolic, since the blocking will cover only the campus wi-fi, meaning that students can turn to their mobile plans to continue using the apps.

The most significant result to come out of this, speech advocates are warning, is a public university with constitutional obligations to protect speech setting a dangerous precedent.

68

The Australian government’s decision to institute a pilot program testing an online age verification digital ID system was overshadowed by a privacy scandal concerning a legal requirement for bars and clubs in the region.

The timing juxtaposed these two narratives in a glaring light, showing how the push for digital ID raises privacy concerns that transcend the initial point of sale or point of access and become part of an ongoing, data-invasive system that makes surveillance much easier.

In New South Wales (NSW), clubs must legally collate personal information from patrons upon entry under the state’s registered clubs legislation, a mandate echoing the proposed age verification and digital ID requirement for websites. The data gathered, meant to be safeguarded under federal privacy laws, has become the heart of recent concerns on privacy and data risks surrounding age verification as it has ended up getting leaked.

However, following hard on the heels of the government’s announcement of an online age verification system, the privacy of club-goers and bar attendees was threatened in a substantial data privacy issue.

There are now suspicions of a considerable data breach involving personal data collected under law by these venues. An unauthorized platform has purportedly made accessible the personal data of over a million customers from at least 16 licensed NSW clubs, forcing cybercrime detectives into action.

The alleged data spill includes records and personal data of high-level government officials. Outabox, an IT service provider, stated it had been notified of a potential data breach in which a sign-in system used by its clients was accessed by an “unrestricted” third party.

Government representatives, in the face of this serious data breach, attempted to understate the magnitude of the incident. The Gaming Minister David Harris, in response to the crisis, clarified the incident wasn’t a hack as it stemmed from a data breach of a third-party vendor.

“We know that this is an alleged data breach of a third-party vendor, so it wasn’t a hack,” he said.

“There was a high-level meeting yesterday and the authorities, cybersecurity and police organizations are currently investigating that and when we get authorization we can give more information.”

But such an incident underscores precisely the apprehensions articulated about online age verification and digital ID mandates. This comes as the government also wants to backdoor encrypted messaging, ending privacy for all. And as with all of this data surveillance, you can’t control who ultimately gets their hands on that data.

69

The New York state of mind, for once, may be winning – New Yorkers never really cared to drive. And with the new developments in the car industry, and its all but formal collusion with the constantly surveilling government – perhaps others might take some pointers?

(Of course, not driving or owning a car does not exempt you from surveillance or censorship, even if you’re just walking down the street – but owning and using one on a daily basis, if it’s a modern, “internet-connected car with hundreds of sensors” – surely significantly lowers your chances of preserving your personal security and data integrity.)

In any case, it’s not looking good out there, as far as privacy and other civil rights are concerned, the way the automobile industry is going. Cars have slowly turned from just machines to get people from A to B into “potential spying machines acting in ways drivers do not completely understand.”

That’s a big jump, on any “annoyance scale.”

In the US, this is still a hard pill to swallow, and so there are initiatives from certain lawmakers to capture the “angst” of the truth developing around cars, freedom, and autonomy, and capitalize on it among their expected voters.

And so, Democrat Senators Ron Wyden and Edward Markey wrote to the Federal Trade Commission regarding car manufacturers sharing data with the police. Some of the arguments, however, were rather narrow, considering that the developments affect everybody.

We obtained a copy of the letter for you here.

But – wrote Wyden – “As far-right politicians escalate their war on women, I’m especially concerned about cars revealing people who cross state lines to obtain an abortion.”

Other instances include cases of stalking, etc. – but why is a person’s right to privacy no longer protected as a given, whether or not a violation may be suspected?

And now for the reality affecting everybody, at different points of their experience: Toyota, Nissan, Subaru, Volkswagen, BMW, Mazda, Mercedes-Benz, and Kia have all confirmed that they have tech embedded in their vehicles allowing them to turn over location data to the US government based solely on a subpoena – that is, without a judge having to sign off on an approval.

Volkswagen is the “outlier” here, in that the company will do the same only if the data is six days old or less – a subpoena will do. But an actual warrant will be needed to turn over data collected over more than a week, according to reports.

70

In a relentless bid to give some of the most authoritarian regimes in the world a run for their money where internet censorship is concerned, Australia’s government continues to come up with one dubious initiative after another.

Recently, there was an attempt to censor content globally (related to two stabbing attacks in Australia), and shortly after, the country’s intelligence chief Mike Burgess, and Federal Police Commissioner Reece Kershaw addressed the National Press Club, to launch yet another attack on encryption by urging compliance with encryption backdoors legislation.

Burgess chose to call this – “accountable encryption.”

It isn’t “accountable” right now because, while Australia has passed laws to essentially break encryption, those who are supposed to implement them, technology companies, are not cooperating.

“I am asking the tech companies to do more. I’m asking them to give effect to the existing powers and to uphold existing laws. Without their help in very limited and strictly controlled circumstances, encryption is unaccountable,” he said.

Burgess was careful to nestle his encryption backdoors plea among seemingly reasonable arguments, such as that encryption provides privacy and is “clearly a good thing” that “enables” transactions (he for some reason chose not to stress that it is in fact necessary for secure transactions).

But, the Australian spy chief went on, encryption also “creates safe spaces for violent extremists to operate, network and recruit.”

And it is their encrypted messages – and only theirs, governments around the world promise faithfully – that the authorities, as “good actors,” would like to be able to access at will.

However, what Burgess and his ilk choose not to take into account is that undermining encryption undermines it for everyone: once introduced, it is only a matter of time before backdoors become available to “all actors” – including ordinary criminals, as well as governments, and other governments they don’t like.

Commissioner Reece Kershaw made a point of Australia not being alone in its aversion toward secure and private internet communications and transactions, quoting a recent statement made by 32 European police chiefs who complained that “the way” end-to-end encryption is deployed “undermines their ability to investigate crime.”

And Kershaw’s own “door is open,” he told the gathering, “to all relevant tech CEOs and chairmen, including Elon Musk and Mark Zuckerberg.”

But what would they discuss? “How to make our lives easier,” the Australian police chief confessed. Oh and – allegedly, also how to make our lives “safer.”

Kershaw also emphasized that his country already has the legal framework in place, but just needs tech companies to work with the authorities.

“If a judicial officer decides there is reasonable suspicion that a serious crime has been committed, and it is necessary for law enforcement to access information to investigate that serious crime, tech companies should respect the rule of law and the order of a court, or independent judicial authority, and provide that information,” he said.

71

Worldcoin, a digital ID project based on biometrics – namely, eyeball scanning – co-founded by OpenAI CEO Sam Altman, is eyeing (no pun intended) partnerships not only with OpenAI but also with PayPal, reports say.

However, these moves are not accompanied by much clarity for now – one example being Worldcoin co-founder and CEO Alex Blania refusing to make a direct announcement regarding the deal with OpenAI.

Blania at the same time confirmed that the company (specifically, Tools for Humanity, the main Worldcoin developer) is talking to PayPal – but the payments transactions giant is currently not commenting on any of this.

The general trend, albeit on a much smaller scale (despite the grandiose ambitions), seems to be the tried-and-tested Big Tech path of acquisitions or collaborations in a particular space in order to consolidate a grip on the market.

Reports note that Tools for Humanity previously started working with Okta, an identity and access management company, while just this April, it bought Ottr Finance, a startup developing digital wallets.

This is happening as Worldcoin is facing pushback from regulators in multiple countries around the world, who are mostly concerned about the enrollment standards (such as age verification) and data storage policies the controversial company has in place.

Worldcoin’s stated effort is to have “every person in the world” in its ID service, where the transactional nature of the thing is users giving up the sensitive biometric data contained in the irises of their eyes in exchange for what some might call “cryptocurrency change.”

The ultimate goal is to create the biggest “human identity and financial network” in the world, and the promise is, no surprise there – that this can and will be done while at the same time “preserving privacy.”

But it is precisely privacy fears that are underpinning the scrutiny over Worldcoin’s operations, and so its plans have been hitting some snags in places as far apart as Hong Kong and Spain, Malaysia, and Portugal.

However, Blania has shared that Worldcoin is taking a “proactive” approach in dealing with regulators, that is – it is hoping that compromising on some features will render the operation as a whole sustainable.

72

Yet another leaked document has emerged revealing details from the European Union policy regarding the implementation of the Child Sexual Abuse Regulation.

(That’s “chat control” – ultimately designed to allow for bulk scanning of all private communications under the guise of looking for illegal content.)

The latest leak, again first published by the Contexte website, is a working document from the current (Belgian) EU Presidency that concerns draft methodology and criteria for the risk categorization of services.

Things are moving fast over in Brussels, at least on this issue – the draft was published on April 10 to member delegations and was to be discussed as soon as five days later by the Law Enforcement Working Party.

But the direction things are moving, according to long-time “chat control” opponent and member of the European Parliament (MEP) Patrick Breyer, is towards “doubling down” on controlling and/or suppressing, by various means, what he calls “services that allow people to protect themselves.”

Read the document here.

Those would be privacy-focused encrypted services, and messaging apps, which, under the methodology presented in the document, would receive lower risk scale scores if people can use them without an account, or with pseudonyms, VPNs, TOR, encryption, cryptocurrencies – in other words, in ways that make surveillance and tracking difficult or impossible.

That’s not something the EU likes at all, and so the plan is to be able (and even likely) to slap those with low scores with orders to scan all content.

However, those who are not focused on private chats but “predominantly engage in public communication” (i.e., are already open to surveillance and data collection, so that detection orders leading to full scanning are not really necessary) will receive better scores.

The EU’s logic here is consistent since those who do not harvest user data are automatically slated to have lower scores. Another thing the EU dislikes is decentralized content sharing (such as P2P-based platforms).

That’s because, as Breyer, a German lawyer and member of the Pirate Party, remarks, P2P renders attempts at server-side scanning useless.

Such a methodology demonizes services like torrenting (P2P) platforms, TOR, ProtonMail, and the like, he said.

“This leaked paper reveals most EU government’s push to mass surveillance and undermining encryption on services essential to citizens, NGOs, lawyers, etc.,” the MEP stated.

“In contrast, the European Parliament’s approach would only permit the interception of conversations by people connected to child sexual abuse, while mandating many more safety-by-design measures than the Council only mentions in this paper without making them mandatory,” Breyer added and concluded:

“We Pirates will not stop fighting for our fundamental right to digital privacy of correspondence and secure encryption.”

73

A controversial executive order that would require U.S. cloud companies to more closely monitor the identities of their customers will move one step closer to the finish line next week amid opposition from the industry.

The White House’s proposed executive order is meant to address an increasingly serious and visible cybersecurity problem in which Chinese and Russian hackers rent U.S. cloud infrastructure space to carry out cyberattacks or scan for vulnerabilities, allowing them to hide in plain sight by acquiring a domestic IP address.

The threat is exacerbated by the fact that the National Security Agency is barred from monitoring American networks.

Cloud companies have vehemently opposed the proposed rule, pointing to the vast logistical and financial costs it would impose and arguing that sophisticated actors will be able to easily dupe cloud companies with fake identities, thereby rendering the effort meaningless. An industry comment period closes on Monday.

“The proposed identity verification requirements for IaaS [infrastructure as a service] providers and foreign resellers are overly burdensome, not sufficiently targeted, and risk advantaging foreign competitors,” the technology industry association NetChoice said in comments filed last week.

NetChoice, which represents two of the three largest cloud providers — Amazon and Google — also took the opportunity to knock their biggest competitor, Microsoft, saying the proposed rule would make the U.S. government even more dependent on the Seattle-based company than it already is.

“The government's dependence on Microsoft products raises serious concerns, as evidenced by the company's recent major security breaches,” the NetChoice comment said. “Diversifying technology providers and using the government's leverage to drive security improvements at Microsoft are essential.”

Supporters of the executive order say the change is vital and argue that the cloud companies need to be reined in, pointing to a report from the American Security Project last year which documented how Microsoft, Amazon and other cloud companies sell their products to the Chinese government and its military.

National security experts said the ubiquity of cloud-based services makes the executive order a no-brainer.

“From a national security perspective, cloud-based services and utilities are literally the keys to the Kingdom these days,” said Paul Rosenzweig, a former Department of Homeland Security official who has since founded Red Branch Consulting, which focuses on national security issues. “We have so far migrated away from server based systems, isolated systems, that it's not even a debatable trend and it's only going to accelerate.”

Last month the Cyber Safety Review Board slammed Microsoft's security practices relating to a 2023 cloud-enabled intrusion which led to Chinese hackers infiltrating the emails of Commerce Secretary Gina Raimondo and U.S. Ambassador to China Nicholas Burns. The report included a series of recommendations for improving cloud security.

Rosenzweig said the Microsoft incident along with several others over the past 18 months have led him to conclude that adversaries like China and Russia take advantage of the U.S. in part through the cloud.

“It all comes down to vulnerabilities and we've just got to do something better,” he said.

74

The FCC on Monday fined four major US telcos almost $200 million for "illegally" selling subscribers' location information to data brokers.

AT&T, Verizon, Sprint, and T-Mobile US – the last two of which merged in 2020 – were ordered to pay $57 million, $47 million, $12 million, and $80 million respectively.

"Our communications providers have access to some of the most sensitive information about us," said FCC boss Jessica Rosenworcel in a statement.

"These carriers failed to protect the information entrusted to them. Here, we are talking about some of the most sensitive data in their possession: Customers’ real-time location information, revealing where they go and who they are."

Concerns about telecoms giants providing customer location data surfaced in 2018 when US Senator Ron Wyden (D-OR) asked Ajit Pai, then head of the FCC, to investigate claims that Securus Technologies bought real-time location data from major wireless carriers.

The FCC under Pai concluded in 2020 that the telcos had likely broken the law, but it wasn't clear what the consequences might be. Now it seems the bill has come due.

"No one who signed up for a cell plan thought they were giving permission for their phone company to sell a detailed record of their movements to anyone with a credit card," Senator Wyden said in a statement today. "I applaud the FCC for following through on my investigation and holding these companies accountable for putting customers’ lives and privacy at risk."

Another American watchdog, the FTC, recently started going after location data brokers. Privacy-oriented legislators in the House of Representatives have done so, too, proposing a bill to ban the US government from purchasing citizens' info from data brokers.

Nonetheless, other government agencies have allegedly been bypassing the Fourth Amendment's warrant requirement by buying phone records from the likes of AT&T.

Easy access to Americans' personal information, such as their location data, is not just a privacy concern – one that's more acute in the post-Dobbs era – but also a matter of national security. This was demonstrated by a recent Duke University study that found information on US military personnel and their families was available from data brokers for as little as $0.12 per record.

According to the FCC Enforcement Bureau, each of the four named carriers sold customer location data to data aggregators that subsequently resold the data to third-party location service companies. The bureau believes each of the four telcos “attempted to offload its obligations to obtain customer consent” to the downstream buyers of the data, a process that often meant valid consent was not obtained.

The FCC in its various forfeiture orders notes that the law "makes clear that carriers cannot disclaim their statutory obligations to protect their customers’ CPNI [customer proprietary network information] by delegating such obligations to third parties."

Carriers blame brokers

AT&T told The Register it should not be blamed for the failure of those buying its data to obtain proper consent, and said it will fight the fine.

"The FCC order lacks both legal and factual merit," an AT&T spokesperson wrote in a statement sent to The Register. "It unfairly holds us responsible for another company’s violation of our contractual requirements to obtain consent, ignores the immediate steps we took to address that company’s failures, and perversely punishes us for supporting life-saving location services like emergency medical alerts and roadside assistance that the FCC itself previously encouraged.

"We expect to appeal the order after conducting a legal review."

AT&T added the program at issue was terminated in 2019.

Verizon also said the FCC had erred in its determination and that its location data program, also shut down five years ago, required affirmative, opt-in customer consent and was intended to support services like roadside assistance and medical alerts.

"Verizon is deeply committed to protecting customer privacy," Verizon spokesperson Rich Young told The Register. "In this case, when one bad actor gained unauthorized access to information relating to a very small number of customers, we quickly and proactively cut off the fraudster, shut down the program, and worked to ensure this couldn't happen again. Unfortunately, the FCC’s order gets it wrong on both the facts and the law, and we plan to appeal this decision."

T-Mobile US also said it planned to fight the fine.

"This industry-wide third-party aggregator location-based services program was discontinued more than five years ago after we took steps to ensure that critical services like roadside assistance, fraud protection and emergency response would not be disrupted," a T-Mo spokesperson told The Register.

"We take our responsibility to keep customer data secure very seriously and have always supported the FCC’s commitment to protecting consumers, but this decision is wrong, and the fine is excessive. We intend to challenge it."

In this case, "excessive" means the $92 million combined fine T-Mobile US has been ordered to pay would amount to about 1.1 percent of its 2023 net income of $8.3 billion.
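That proportion is easy to verify (a sketch using the figures in the preceding paragraph; the Sprint and T-Mobile fines are combined because the two carriers merged in 2020):

```python
# T-Mobile US combined fine (T-Mobile $80M + Sprint $12M) vs. 2023 net income
combined_fine = 80 + 12   # millions of dollars
net_income = 8300         # $8.3 billion, expressed in millions

share = combined_fine / net_income
print(f"{share:.1%}")  # 1.1%
```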

Most major automakers share driver location data without a warrant or court order despite having publicly pledged not to do so, a congressional investigation released Tuesday revealed.

Only five of 14 queried auto manufacturers require a warrant or court order before giving law enforcement connected car owners’ location data, and only one alerts customers to law enforcement requests for their information, the senators found.

A fifteenth automaker, Volvo, did not respond to senators’ request for information, which was made through an unnamed auto industry association.

The automakers have all pledged through their primary industry association to protect car owners’ location data and to insist on warrants or court orders before giving law enforcement the data — a promise the senators called deceptive in a press release.

Sen. Ron Wyden (D-OR) made the inquiry, and Sen. Ed Markey (D-MA) has been a leader on the issue. They jointly sent a letter to the Federal Trade Commission (FTC) on Tuesday demanding an investigation.

Toyota, Nissan, Subaru, Volkswagen, BMW, Mazda, Mercedes-Benz and Kia all acknowledged that they require only subpoenas, which do not need a judge’s sign-off, before sharing the location data with government agencies, the senators said.

The senators noted that Volkswagen said it insists on a warrant for more than a week’s worth of location data.

“Automakers have not only kept consumers in the dark regarding their actual practices, but multiple companies misled consumers for over a decade by failing to honor the industry’s own voluntary privacy principles,” the letter said. “To that end, we urge the FTC to investigate these auto manufacturers’ deceptive claims as well as their harmful data retention practices.”

The Alliance for Automotive Innovation, the global auto industry’s primary remaining trade association, released a statement saying car manufacturers are “committed to protecting sensitive vehicle location information.”

“This is a complex issue,” the statement said. “Vehicle location information is only provided to law enforcement under specific and limited circumstances, such as when the automaker is provided a warrant or court order or in situations where there is an imminent threat of serious bodily harm or death to an individual.”

The statement appeared to rebut Wyden’s findings, which he said he obtained from auto manufacturers through the unnamed industry association. It is unclear why the statement from the Alliance for Automotive Innovation appeared to contradict automakers’ admissions to Wyden. A spokesman for the association did not reply to a request for comment on that point.

At a March event hosted by the Future of Privacy Forum, Hilary Cain — the association’s senior vice president for policy — repeatedly referred to the automakers’ voluntary principles in explaining the industry’s commitment to privacy, calling them “industry standards.”

According to the senators, in some cases the auto manufacturers store location data for a decade or more, while others are quick to erase it.

Mercedes-Benz told the senators the company “does not engage in the systematic collection of historical location data from the vehicle.” It said it only stores where a given vehicle has most recently parked and erases that data once a vehicle is moved.

But the senators said Hyundai acknowledged it “routinely” collects and retains vehicle location data for as many as 15 years, Toyota for as many as 10 years, and Honda for as many as seven years.

In 2014, a group of now-defunct auto industry associations wrote a letter to the FTC promoting their pledge that “requests or demand from governmental entities for geolocation information, must be in the form of a warrant or court order,” except in emergencies or with the consent of the vehicle owner.

The Alliance for Automotive Innovation has said it reviews and updates those pledges every two years and still makes the promise not to give law enforcement location data without a warrant or court order.

“These companies are not just less protective of their customers’ privacy,” the senators said. “Their policies directly contradict the public commitment the companies made and invited the FTC to enforce.”

Connected car data is uniquely vulnerable to law enforcement, as Recorded Future News reported in December. The contents of emails, private photos saved in the cloud, and mobile phones all require a warrant for law enforcement to access in keeping with Fourth Amendment protections against unreasonable searches and seizures.

“Consumers can only vote with their wallets when companies — or regulators — make such important product information available to the public,” the senators wrote to the FTC. “In this case, automakers have not only kept consumers in the dark regarding their actual practices, but multiple companies misled consumers for over a decade by failing to honor the industry’s own voluntary privacy principles.”
