
A South Korean media outlet has alleged that local telco KT deliberately infected some customers with malware due to their excessive use of peer-to-peer (P2P) downloading tools.

The number of infected users of “web hard drives” – the South Korean term for the online storage services that allow uploading and sharing of content – has reportedly reached 600,000.

Malware designed to hide files was allegedly inserted into the Grid Program – the code that allows KT users to exchange data in a peer-to-peer method. The file exchange services subsequently stopped working, leading users to complain on bulletin boards.

The throttling shenanigans were reportedly ongoing for nearly five months, beginning in May 2020, and were carried out from inside one of KT's own datacenters.

The incident has reportedly drawn enough attention to warrant an investigation from the police, who have apparently searched KT's headquarters and datacenter and seized material, in pursuit of evidence that the telco violated South Korea’s Communications Secrets Protection Act (CSPA) and the Information and Communications Network Act (ICNA).

The CSPA aims to protect the privacy and confidentiality of communications while the ICNA addresses the use and security of information and communications networks.

The investigation has reportedly uncovered an entire team at KT dedicated to detecting and interfering with the file transfers, with some workers assigned to malware development, others to distribution and operation, and still others to wiretapping. Thirteen KT employees and partner-company employees have allegedly been identified and referred for potential prosecution.

The Register has reached out to KT to confirm the incident and will report back should a substantial reply materialize.

But according to local media, KT's position is that since the web hard drive P2P service itself is a malicious program, it has no choice but to control it.

P2P sites can burden networks, as can legitimate streaming – a phenomenon that saw South Korean telcos fight a bitter legal dispute with Netflix over who should foot the bill for network operation and construction costs.

A South Korean telco acting to curb inconvenient traffic is therefore not out of step with local mores. Distributing malware and deleting customer files are, however, not accepted practices as they raise ethical concerns about privacy and consent.

Of course, given that files shared on P2P services are notoriously targeted by malware distributors, perhaps KT assumed its web hard drive users wouldn't notice a little extra virus here and there.


“All things are arranged in a certain order, and this order constitutes the form by which the universe resembles God.” - Dante, Paradiso

This post reveals the Tree of Life map of all levels of reality, proves that it is encoded in the inner form of the Tree of Life and demonstrates that the Sri Yantra, the Platonic solids and the disdyakis triacontahedron are equivalent representations of this map.

Consciousness is the greatest mystery still unexplained by science. This section presents mathematical evidence that consciousness is not a product of physical processes, whether quantum or not, but encompasses superphysical realities whose number and pattern are encoded in sacred geometries.

submitted 1 week ago* (last edited 1 week ago) by c0mmando@links.hackliberty.org to c/occult@links.hackliberty.org

In this epic, all-day presentation, Mark Passio of What On Earth Is Happening exposes the origins of the two most devastating totalitarian ideologies of all time. Mark explains how both Nazism and Communism are but two masks on the same face of Dark Occultism, analyzing their similarities in both mindset and authoritarian methods of control. Mark also delves into the ways in which these insidious occult religions are still present, active and highly dangerous to freedom in the world today. This critical occult information is an indispensable component to any serious student of both world history and esoteric knowledge. Your world-view will be changed by this most recent addition to the Magnum Opus of Mark Passio.

submitted 1 week ago* (last edited 1 week ago) by c0mmando@links.hackliberty.org to c/privacy@links.hackliberty.org

The European Union (EU) has managed to unite politicians, app makers, privacy advocates, and whistleblowers in opposition to the bloc’s proposed encryption-breaking new rules, known as “chat control” (officially, CSAM (child sexual abuse material) Regulation).

Thursday was slated as the day for member countries’ governments, via their EU Council ambassadors, to vote on the bill that mandates automated searches of private communications on the part of platforms, and “forced opt-ins” from users.

However, reports on Thursday afternoon quoted unnamed EU officials as saying that “the required qualified majority would just not be met” – and that the vote was therefore canceled.

This comes after several countries, including Germany, signaled they would either oppose or abstain during the vote. The gist of the opposition to the bill long in the making is that it seeks to undermine end-to-end encryption to allow the EU to carry out indiscriminate mass surveillance of all users.

The justification here is that such drastic new measures are necessary to detect and remove CSAM from the internet – but this argument is rejected by opponents as a smokescreen for finally breaking encryption, and exposing citizens in the EU to unprecedented surveillance while stripping them of the vital technology guaranteeing online safety.

Some strictly security- and privacy-focused apps, such as Signal and Threema, said ahead of the vote expected on Thursday that they would withdraw from the EU market if they had to include client-side scanning, i.e., automated monitoring of content on users' own devices.
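For readers unfamiliar with the term, client-side scanning generally means checking content on the user's device, before encryption, against a database of known-bad fingerprints. The toy sketch below illustrates only the control flow: it uses a plain SHA-256 match in Python, whereas real proposals rely on perceptual hashing (PhotoDNA-style), and all names here are illustrative rather than drawn from the regulation.

```python
import hashlib

# Toy model of client-side scanning: before a message leaves the device,
# the attachment is hashed and compared against a list of flagged hashes.
# Hypothetical blocklist entry: the well-known SHA-256 of b"hello".
BLOCKLIST = {
    "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824",
}

def scan_before_send(attachment: bytes) -> bool:
    """Return True if the attachment may be sent, False if it is flagged."""
    digest = hashlib.sha256(attachment).hexdigest()
    return digest not in BLOCKLIST

assert scan_before_send(b"holiday photo")      # unknown content passes
assert not scan_before_send(b"hello")          # matching hash is flagged
```

The objection raised by Signal and others is precisely that this check has to run on the plaintext, on the user's device, before encryption ever happens.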

WhatsApp hasn’t gone quite so far (yet), but Will Cathcart, who heads the app at Meta, didn’t mince his words in a post on X when he wrote that what the EU is proposing breaks encryption.

“It’s surveillance and it’s a dangerous path to go down,” Cathcart posted.

European Parliament (EP) member Patrick Breyer, who has been a vocal critic of the proposed rules, and also involved in negotiating them on behalf of the EP, on Wednesday issued a statement warning Europeans that if “chat control” is adopted – they would lose access to common secure messengers.

“Do you really want Europe to become the world leader in bugging our smartphones and requiring blanket surveillance of the chats of millions of law-abiding Europeans? The European Parliament is convinced that this Orwellian approach will betray children and victims by inevitably failing in court,” he stated.

“We call for truly effective child protection by mandating security by design, proactive crawling to clean the web, and removal of illegal content, none of which is contained in the Belgium proposal governments will vote on tomorrow (Thursday),” Breyer added.

And who better to assess the danger of online surveillance than the man who revealed its extraordinary scale, Edward Snowden?

“EU apparatchiks aim to sneak a terrifying mass surveillance measure into law despite UNIVERSAL public opposition (no thinking person wants this) by INVENTING A NEW WORD for it – ‘upload moderation’ – and hoping no one learns what it means until it’s too late. Stop them, Europe!,” Snowden wrote on X.

It appears that, at least for the moment, Europe has.


In a statement issued on the occasion of the “International Day for Countering Hate Speech,” UN Secretary-General Antonio Guterres called for the global eradication of so-called “hate speech,” which he described as inherently toxic and entirely intolerable.

The issue of censoring “hate speech” stirs significant controversy, primarily due to the nebulous and subjective nature of its definition. At the heart of the debate is a profound concern: whoever defines what constitutes hate speech essentially holds the power to determine the limits of free expression.

This power, wielded without stringent checks and balances, leads to excessive censorship and suppression of dissenting voices, which is antithetical to the principles of a democratic society.

Guterres highlighted the historic and ongoing damage caused by hate speech, citing devastating examples such as Nazi Germany, Rwanda, and Bosnia to suggest that speech leads to violence and even crimes against humanity.

“Hate speech is a marker of discrimination, abuse, violence, conflict, and even crimes against humanity. We have time and again seen this play out from Nazi Germany to Rwanda, Bosnia and beyond. There is no acceptable level of hate speech; we must all work to eradicate it completely,” Guterres said.

Guterres also noted what he characterized as a worrying rise in antisemitic and anti-Muslim sentiment, propagated both online and by prominent figures.

Guterres argued that countries are legally bound by international law to combat incitement to hatred while simultaneously fostering diversity and mutual respect. He urged nations to uphold these legal commitments and to take action that both prevents hate speech and safeguards free expression.

The UN General Assembly marked June 18 as the “International Day for Countering Hate Speech” in 2021.

Guterres has long promoted online censorship, complaining about the issue of online “misinformation” several times, describing it as “grave” and suggesting the creation of an international code to tackle it.

His strategy involves a partnership among governments, tech giants, and civil society to curb the spread of “false” information on social media, despite risks to free speech.


Big Brother might be always “watching you” – but guess what, five (pairs) of eyes sound better than one. Especially when you’re a group of countries out to do mass surveillance across different jurisdictions, and incidentally or not, name yourself by picking one from the “dystopian baby names” list.

But then again, those “eyes” might be so many and so ambitious in their surveillance bid, that they end up criss-crossed, not serving their citizens well at all.

And so, the Five Eyes (US, UK, Canada, Australia, New Zealand) – an intelligence alliance brought together by former colonial and language ties that bind – has reportedly been collecting no less than 100 times more biometric data, including demographics and other information concerning non-citizens, than it did when the effort began around 2011.

That’s according to reports, which basically tell you: if you’re a Five Eyes national, or a visitor from any of the UN’s remaining 188 member countries, expect to be under thorough – including biometric – surveillance.

The program is (perhaps misleadingly?) known as “Migration 5.” (“Known to One, Known to All” is reportedly the slogan. It sounds cringeworthy – and, given the promise of the Five Eyes, it turns out to be more than merely embarrassing.)

At least as far as the news now surfacing about it goes, it was none other than “junior partner” New Zealand that gave momentum to reports about the situation. The overall idea is to keep a close – including biometric – eye on cross-border movement within the Five Eyes member countries.

How that works for the US, with its own liberal immigration policy, is anybody’s guess at this point. But it does seem like legitimate travelers, with legitimate citizenship outside – and even inside – the “Five Eyes” might get caught up in this particular net the most.

“Day after day, people lined up at the United States Consulate, anxiously waiting, clutching the myriad documents they need to work or study in America,” a report from New Zealand said.

“They’ve sent in their applications, given up their personal details, their social media handles, their photos, and evidence of their reason for visiting. They press their fingerprints on to a machine to be digitally recorded.”

The overall “data hunger” among these five post-WW2 – now “criss-crossed” – eyes has reportedly risen to 8 million biometric checks over the past years.

“The UK now says it may reach the point where it checks everyone it can with its Migration 5 partners,” says one report.


In the UK, a series of AI trials involving thousands of train passengers who were unwittingly subjected to emotion-detecting software raises profound privacy concerns. The technology, developed by Amazon and employed at various major train stations including London’s Euston and Waterloo, as well as Manchester Piccadilly, used artificial intelligence to scan faces and assess emotional states along with age and gender. Documents obtained by the civil liberties group Big Brother Watch through a freedom of information request unveiled these practices, which might soon influence advertising strategies.

Over the last two years, these trials, managed by Network Rail, implemented “smart” CCTV technology and older cameras linked to cloud-based systems to monitor a range of activities. These included detecting trespassing on train tracks, managing crowd sizes on platforms, and identifying antisocial behaviors such as shouting or smoking. The trials even monitored potential bike theft and other safety-related incidents.

The data derived from these systems could be utilized to enhance advertising revenues by gauging passenger satisfaction through their emotional states, captured when individuals crossed virtual tripwires near ticket barriers. Despite the extensive use of these technologies, the efficacy and ethical implications of emotion recognition are hotly debated. Critics, including AI researchers, argue the technology is unreliable and have called for its prohibition, supported by warnings from the UK’s data regulator, the Information Commissioner’s Office, about the immaturity of emotion analysis technologies.

According to Wired, Gregory Butler, CEO of Purple Transform, has mentioned discontinuing the emotion detection capability during the trials and affirmed that no images were stored while the system was active. Meanwhile, Network Rail has maintained that its surveillance efforts are in line with legal standards and are crucial for maintaining safety across the rail network. Yet, documents suggest that the accuracy and application of emotion analysis in real settings remain unvalidated, as noted in several reports from the stations.

Privacy advocates are particularly alarmed by the opaque nature and the potential for overreach in the use of AI in public spaces. Jake Hurfurt from Big Brother Watch has expressed significant concerns about the normalization of such invasive surveillance without adequate public discourse or oversight.

Jake Hurfurt, Head of Research & Investigations at Big Brother Watch, said: “Network Rail had no right to deploy discredited emotion recognition technology against unwitting commuters at some of Britain’s biggest stations, and I have submitted a complaint to the Information Commissioner about this trial.

“It is alarming that as a public body it decided to roll out a large scale trial of Amazon-made AI surveillance in several stations with no public awareness, especially when Network Rail mixed safety tech in with pseudoscientific tools and suggested the data could be given to advertisers.

“Technology can have a role to play in making the railways safer, but there needs to be a robust public debate about the necessity and proportionality of tools used.

“AI-powered surveillance could put all our privacy at risk, especially if misused, and Network Rail’s disregard of those concerns shows a contempt for our rights.”


Big Tech coalition Digital Trust & Safety Partnership (DTSP), the UK’s regulator OFCOM, and the World Economic Forum (WEF) have come together to produce a report.

The three entities, each in their own way, are known for advocating for or carrying out speech restrictions and policies that can result in undermining privacy and security.

DTSP says it is there to “address harmful content” and makes sure online age verification (“age assurance”) is enforced, while OFCOM states its mission to be establishing “online safety.”

Now they have co-authored a WEF report – a white paper from the WEF Global Coalition for Digital Safety – that puts forward the idea of closer cooperation with law enforcement in order to more effectively “measure” what they consider to be online digital safety and reduce what they identify to be risks.

The importance of this is explained by the need to properly allocate funds and ensure compliance with regulations. Yet again, “balancing” this with privacy and transparency concerns is mentioned several times in the report almost as a throwaway platitude.

The report also proposes co-opting (even more) research institutions for the sake of monitoring data – as the document puts it, a “wide range of data sources.”

More proposals made in the paper would grant other entities access to this data, and there is a drive to develop and implement “targeted interventions.”

Under the “Impact Metrics” section, the paper states that these are necessary to turn “subjective user experiences into tangible, quantifiable data,” which is then supposed to allow for measuring “actual harm or positive impacts.”

To get there the proposal is to collaborate with experts as a way to understand “the experience of harm” – and that includes law enforcement and “independent” research groups, as well as advocacy groups for survivors.

Those, as well as law enforcement, are supposed to be engaged when “situations involving severe adverse effect and significant harm” are observed.

Meanwhile, the paper proposes collecting a wide range of data for the sake of performing these “measurements” – from platforms, researchers, and (no doubt select) civil society entities.

The report goes on to say it is crucial to find the best ways of collecting targeted data “while avoiding privacy issues” (but doesn’t say how).

The resulting targeted interventions should be “harmonized globally.”

As for who should have access to this data, the paper states:

“Streamlining processes for data access and promoting partnerships between researchers and data custodians in a privacy-protecting way can enhance data availability for research purposes, leading to more robust and evidence-based approaches to measuring and addressing digital safety issues.”


These days, as the saying goes – you can’t swing a cat without hitting a “paper of record” giving prominent op-ed space to some current US administration official – and this is happening very close to the presidential election.

This time, the New York Times and US Surgeon General Vivek Murthy got together, with Murthy’s own slant on what opponents might see as another push to muzzle social media ahead of the November vote, under any pretext.

The pretext, as per Murthy, is new legislation that would “shield young people from online harassment, abuse and exploitation” – and there’s disinformation and such, of course.

Coming from Murthy, this is inevitably branded as “health disinformation.” But the way digital rights group EFF sees it – requiring “a surgeon general’s warning label on social media platforms, stating that social media is associated with significant mental health harms for adolescents” – is just unconstitutional.

Whenever minors are mentioned in this context, the obvious question is – how do platforms know somebody’s a minor? And that’s where the privacy and security nightmare known as age verification, or “assurance” comes in.

Critics think this is no more than a thinly veiled campaign to unmask internet users under what the authorities believe is the platitude that cannot be argued against – “thinking of the children.”

Yet in reality, while it can harm children, the overall target is everybody else. Basically, in a just and open internet, an adult who wants to use this digital town square and express an opinion should not have to produce a government-issued photo ID first.

And “nevermind” the fact that the same type of “advisory” is what is currently before the Supreme Court in the Murthy v. Missouri case, which is deliberating whether no less than the First Amendment was violated in the alleged prior censorship collusion between the government and Big Tech.

The White House is at this stage cautious about openly endorsing the points Murthy made in the NYT think-piece, with spokesperson Karine Jean-Pierre “neither confirming nor denying” anything.

“So I think that’s important that he’ll continue to do that work” – was the “nothing burger” of a reply Jean-Pierre offered when asked about the idea of “Murthy labels.”

But Murthy – and really, the whole crowd around the current administration, and the legacy media bending their way – now seems to be in going-for-broke mode ahead of November.


If it looks like a duck… and in particular, quacks like a duck, it’s highly likely a duck. And so, even though the Stanford Internet Observatory is reportedly getting dissolved, the University of Washington’s Center for an Informed Public (CIP) continues its activities. But that’s not all.

CIP headed the pro-censorship coalitions the Election Integrity Partnership (EIP) and the Virality Project with the Stanford Internet Observatory, while the Stanford outfit was set up shortly before the 2020 vote with the goal of “researching misinformation.”

The groups led by both universities would publish their findings in real-time, no doubt, for maximum and immediate impact on voters. For some, what that impact may have been, or was meant to be, requires research and a study of its own. Many, on the other hand, are sure it targeted them.

So much so that the US House Judiciary Committee’s Weaponization Select Subcommittee established that EIP collaborated with federal officials and social platforms, in violation of free speech protections.

What has also been revealed is that CIP’s co-founder and leader is one Kate Starbird – who, as it turned out from ongoing censorship and speech-related legal cases, was once a secret adviser to Big Tech on “content moderation policies.”

Considering how that “moderation” was carried out, namely, how it morphed into unprecedented censorship, anyone involved should be considered discredited enough not to try the same this November.

However, even as SIO is shutting down, reports say those associated with its ideas intend to continue tackling what Starbird calls online rumors and disinformation. Moreover, she claims that this work has been ongoing “for over a decade” – apparently implying that these activities are not related to the two past, and one upcoming hotly contested elections.

And yet – “We are currently conducting and plan to continue our ‘rapid’ research — working to identify and rapidly communicate about emergent rumors — during the 2024 election,” Starbird is quoted as stating in an email.

Not only is Starbird not ready to stand down in her crusade against online speech, but reports don’t seem to be able to confirm that the Stanford group is actually getting disbanded, with some referring to the goings on as SIO “effectively” shutting down.

What might be happening is the Stanford Internet Observatory (SIO) becoming part of Stanford’s Cyber Policy Center. Could the duck just be covering its tracks?


Delta Chat, a messaging application celebrated for its robust stance on privacy, has yet again rebuffed attempts by Russian authorities to access encryption keys and user data. This defiance is part of the app’s ongoing commitment to user privacy, which was articulated forcefully in a response from Holger Krekel, the CEO of the app’s developer.

On June 11, 2024, Russia’s Federal Service for Supervision of Communications, Information Technology, and Mass Media, known as Roskomnadzor, demanded that Delta Chat register as a messaging service within Russia and surrender access to user data and decryption keys. In response, Krekel conveyed that Delta Chat’s architecture inherently prevents the accumulation of user data—be it email addresses, messages, or decryption keys—because it allows users to independently select their email providers, thereby leaving no trail of communication within Delta Chat’s control.

The app, which operates on a decentralized platform utilizing existing email services, ensures that it stores no user data or encryption keys. That data instead remains in the hands of the email providers and the users themselves, safeguarded on their devices, making it technically unfeasible for Delta Chat to fulfill any government’s data requests.
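As a rough illustration of why such an architecture has nothing to hand over, here is a toy model in Python. All names are hypothetical, and a simple XOR pad stands in for Delta Chat's actual OpenPGP/Autocrypt encryption; the point is only that keys are generated and kept on the endpoint, while the relay (an email server, in Delta Chat's case) stores nothing but opaque ciphertext.

```python
import secrets

class Device:
    """An endpoint: generates its key locally and never shares it."""
    def __init__(self):
        self.key = secrets.token_bytes(32)  # never leaves the device

    def encrypt(self, plaintext: bytes) -> bytes:
        # One-time-pad-style XOR; only valid for messages up to the key length.
        return bytes(p ^ k for p, k in zip(plaintext, self.key))

    decrypt = encrypt  # XOR is its own inverse

class Relay:
    """Stores only opaque ciphertext -- there are no keys to surrender."""
    def __init__(self):
        self.mailbox = []

    def deliver(self, ciphertext: bytes):
        self.mailbox.append(ciphertext)

alice = Device()
relay = Relay()
relay.deliver(alice.encrypt(b"meet at noon"))

assert relay.mailbox[0] != b"meet at noon"                 # relay sees only ciphertext
assert alice.decrypt(relay.mailbox[0]) == b"meet at noon"  # only the device can read it
```

A subpoena served on `Relay` can yield nothing but the ciphertext it already stores, which is essentially the position Krekel laid out.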

Highlighting the ongoing global governmental challenges against end-to-end encryption, a practice vital to safeguarding digital privacy, Delta Chat outlined its inability to comply with such demands on its Mastodon account.

They noted that this pressure is not unique to Russia, but is part of a broader international effort by various governments, including those in the EU, the US, and the UK, to weaken the pillars of digital security.


The Internal Revenue Service (IRS) has come under fire for its decision to route Freedom of Information Act (FOIA) requests through a biometric identification system provided by ID.me. This arrangement requires users who wish to file requests online to undergo a digital identity verification process, which includes facial recognition technology.

Concerns have been raised about this method of identity verification, notably the privacy implications of handling sensitive biometric data. Although the IRS states that biometric data is deleted promptly—within 24 hours in cases of self-service and 30 days following video chat verifications—skeptics, including privacy advocates and some lawmakers, remain wary, particularly as they don’t believe people should have to subject themselves to such measures in the first place.

Criticism has particularly focused on the appropriateness of employing such technology for FOIA requests. Alex Howard, the director of the Digital Democracy Project, expressed significant reservations. He stated in an email to FedScoop, “While modernizing authentication systems for online portals is not inherently problematic, adding such a layer to exercising the right to request records under the FOIA is overreach at best and a violation of our fundamental human right to access information at worst, given the potential challenges doing so poses.”

Although it is still possible to submit FOIA requests through traditional methods like postal mail, fax, or in-person visits, and through the more neutral FOIA.gov, the IRS’s online system defaults to using ID.me, citing speed and efficiency.

An IRS spokesperson defended this method by highlighting that ID.me adheres to the National Institute of Standards and Technology (NIST) guidelines for credential authentication. They explained, “The sole purpose of ID.me is to act as a Credential Service Provider that authenticates a user interested in using the IRS FOIA Portal to submit a FOIA request and receive responsive documents. The data collected by ID.me has nothing to do with the processing of a FOIA request.”

Despite these assurances, the integration of ID.me’s system into the FOIA request process continues to stir controversy as the push for online digital ID verification is a growing and troubling trend for online access.

[-] c0mmando@links.hackliberty.org 13 points 1 week ago

this leads to you not being able to use the internet without associating it with your digital id

[-] c0mmando@links.hackliberty.org 12 points 3 months ago* (last edited 3 months ago)

and at the cost of consumer privacy

[-] c0mmando@links.hackliberty.org 13 points 5 months ago

when I was looking some of these people up, I was surprised how many billionaires came up...

In the 37th annual Forbes list of the world's billionaires, the list included 2,640 billionaires with a total net wealth of $12.2 trillion, down 28 members and $500 billion from 2022.

however, when considering that there are only ~2,600 billionaires in the world, I could see how these ultra rich only associate with each other.

[-] c0mmando@links.hackliberty.org 10 points 7 months ago

of course it will.. but downloading 150 TB is overkill if you want one book

[-] c0mmando@links.hackliberty.org 15 points 9 months ago

Pixel GrapheneOS gangggg

[-] c0mmando@links.hackliberty.org 10 points 10 months ago

annas-archive.org

[-] c0mmando@links.hackliberty.org 16 points 10 months ago

Love F-Droid but be aware of the risks and always try to use a developer repo when possible..

https://privsec.dev/posts/android/f-droid-security-issues/

[-] c0mmando@links.hackliberty.org 53 points 10 months ago

Normie's gonna normie. If we ain't talking over signal we ain't talking.

[-] c0mmando@links.hackliberty.org 14 points 10 months ago* (last edited 10 months ago)

From Riseup: “Due to Thanksgiving and other deadlines, our lawyers were not available to advise us on what we can and cannot say,” the collective member told me. “So in the interest of adopting a precautionary principle, we couldn’t say anything. Now that we have talked to [counsel], we can clearly say that since our beginning, and as of this writing, riseup has not received a NSL, a FISA order/directive, or any other national security order/directive, foreign or domestic.”

Intercept article: "And yet, when I asked if riseup had received any request for user data since August 16, the collective did not comment. Clearly, something happened, but riseup isn’t able to talk about it publicly. The riseup collective is currently having internal discussions about when it will be able to update its warrant canary."
