1215 points (99.2% liked) · submitted on 01 Apr 2024 by possiblylinux127@lemmy.zip to c/linux@lemmy.ml
[-] Aatube@kbin.melroy.org 237 points 8 months ago

Don't forget all of this was discovered because ssh was running 0.5 seconds slower

[-] Steamymoomilk@sh.itjust.works 91 points 8 months ago

It's toooo much bloat. There must be malware XD Linux users at their peak!

[-] rho50@lemmy.nz 96 points 8 months ago* (last edited 7 months ago)

Tbf 500ms latency on - IIRC - a loopback network connection in a test environment is a lot. It's not hugely surprising that a curious engineer dug into that.

[-] ryannathans@aussie.zone 40 points 8 months ago

Especially since it took only 300ms before and 800ms after.
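
For a sense of how visible that kind of jump is, here is a minimal sketch of measuring it yourself, assuming an sshd reachable on localhost with key-based auth; the host, options, and run count are illustrative assumptions, not something from the thread.

  # Minimal sketch: time repeated SSH logins and report the median, so a jump
  # from ~300 ms to ~800 ms stands out. Assumes sshd on localhost and key auth.
  import statistics
  import subprocess
  import time

  def time_ssh_logins(host: str = "localhost", runs: int = 10) -> list[float]:
      samples = []
      for _ in range(runs):
          start = time.monotonic()
          # BatchMode avoids interactive prompts; 'true' exits right after auth.
          subprocess.run(
              ["ssh", "-o", "BatchMode=yes", host, "true"],
              stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=False,
          )
          samples.append(time.monotonic() - start)
      return samples

  if __name__ == "__main__":
      samples = time_ssh_logins()
      print(f"median login time: {statistics.median(samples) * 1000:.0f} ms")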

[-] Jolteon@lemmy.zip 80 points 8 months ago

Half a second is a really, really long time.

[-] lurch@sh.itjust.works 26 points 8 months ago

Reminds me of Data after the Borg Queen incident.

[-] imsodin@infosec.pub 52 points 8 months ago

Technically that wasn't the initial entrypoint, paraphrasing from https://mastodon.social/@AndresFreundTec/112180406142695845 :

It started with sshd using an unreasonable amount of CPU, which interfered with benchmarks. Profiling then showed the CPU time being spent in liblzma without being attributable to anything. He also remembered earlier valgrind issues, which had only come up because of a build flag he no longer even remembers why he set. On top of that, he ran all of this on Debian unstable to catch (unrelated) issues early. Had any of these factors been missing, he wouldn't have caught it. All of this is so nuts.
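
As a rough sketch of the first step in that chain (noticing the CPU cost at all), one can compare wall-clock time against the CPU time charged to a child process. This only covers the client side; in the real investigation the suspicious CPU was in the sshd processes and was attributed to liblzma by profiling. The command and host below are assumptions for illustration.

  # Sketch: compare wall-clock vs. CPU time of a child process. An operation that
  # burns far more CPU than expected is the kind of anomaly that prompts profiling.
  import resource
  import subprocess
  import time

  def wall_and_child_cpu(cmd: list[str]) -> tuple[float, float]:
      before = resource.getrusage(resource.RUSAGE_CHILDREN)
      start = time.monotonic()
      subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=False)
      wall = time.monotonic() - start
      after = resource.getrusage(resource.RUSAGE_CHILDREN)
      cpu = (after.ru_utime - before.ru_utime) + (after.ru_stime - before.ru_stime)
      return wall, cpu

  if __name__ == "__main__":
      wall, cpu = wall_and_child_cpu(["ssh", "-o", "BatchMode=yes", "localhost", "true"])
      print(f"wall: {wall:.3f}s  child CPU: {cpu:.3f}s")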

[-] possiblylinux127@lemmy.zip 48 points 8 months ago

Postgres sort of saved the day

[-] oce@jlai.lu 34 points 8 months ago

Is that from the Microsoft engineer or did he start from this observation?

[-] whereisk@lemmy.world 45 points 8 months ago

From what I read it was this observation that led him to investigate the cause. But this is the first time I read that he's employed by Microsoft.

[-] merthyr1831@lemmy.world 124 points 8 months ago

I know this is being treated as a social engineering attack, but having unreadable binary blobs as part of your build/dev pipeline is fucking insane.

[-] suy@programming.dev 39 points 8 months ago

Is it, really? If the whole point of the library is dealing with binary files, how are you even going to have automated tests of the library?

The scary thing is that there are people still using autotools, or any other hyper-complicated build system in which this is easy to hide, because who the hell wants to learn Makefiles, autoconf, automake, M4 and shell scripting all at once just to compile a few C files? I think hiding this in any other build system would definitely have been harder. Check this mess:

  dnl Define somedir_c_make.
  [$1]_c_make=`printf '%s\n' "$[$1]_c" | sed -e "$gl_sed_escape_for_make_1" -e "$gl_sed_escape_for_make_2" | tr -d "$gl_tr_cr"`
  dnl Use the substituted somedir variable, when possible, so that the user
  dnl may adjust somedir a posteriori when there are no special characters.
  if test "$[$1]_c_make" = '\"'"${gl_final_[$1]}"'\"'; then
    [$1]_c_make='\"$([$1])\"'
  fi
  if test "x$gl_am_configmake" != "x"; then
    gl_[$1]_config='sed \"r\n\" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2>/dev/null'
  else
    gl_[$1]_config=''
  fi
[-] nxdefiant@startrek.website 25 points 8 months ago* (last edited 8 months ago)

It's not uncommon to keep example bad data around for regression tests to run against, and I imagine that's not the only example in a compression library, but I'd definitely consider that a level of testing above unit tests, and would not include it in the main repo. Tests that verify behavior at run time, whether interacting with the user, integrating with other software or services, or after being packaged, belong elsewhere. In summary, this is lazy.

[-] xlash123@sh.itjust.works 24 points 8 months ago

As mentioned, binary test files make sense for this utility. In the future, though, there should be an expectation to document how and why the binary files were constructed the way they were, kinda like how encryption algorithms explain how they derived any arbitrary or magic numbers. This would bring more trust and transparency to these files without having to eliminate them.
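
One way to get that transparency, sketched below with an assumed payload pattern and corruption offset: ship a small script that regenerates the binary fixture from documented inputs, so reviewers can rebuild the blob and diff it instead of having to trust it.

  # Sketch: regenerate a "corrupted .xz" test fixture deterministically from
  # documented inputs instead of committing an opaque blob. The payload pattern
  # and corrupted offset are illustrative assumptions.
  import lzma
  import pathlib

  def build_fixture(path: pathlib.Path) -> None:
      payload = b"0123456789abcdef" * 256           # 4 KiB of documented content
      stream = bytearray(lzma.compress(payload, preset=6))
      stream[32] ^= 0xFF                             # flip one byte to exercise error paths
      path.write_bytes(stream)

  if __name__ == "__main__":
      out = pathlib.Path("bad-stream.xz")
      build_fixture(out)
      print(f"wrote {out} ({out.stat().st_size} bytes)")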

[-] gregorum@lemm.ee 119 points 8 months ago

Thank you open source for the transparency.

[-] Cornelius_Wangenheim@lemmy.world 69 points 8 months ago
[-] just_another_person@lemmy.world 65 points 8 months ago

Shocking, but true.

[-] d3Xt3r@lemmy.nz 99 points 8 months ago

This is informative, but unfortunately it doesn't explain how the actual payload works - how does it compromise SSH exactly?

[-] Aatube@kbin.melroy.org 47 points 8 months ago

It allows a patched SSH client to bypass SSH authentication and gain access to a compromised computer

[-] d3Xt3r@lemmy.nz 66 points 8 months ago* (last edited 8 months ago)

From what I've heard so far, it's NOT an authentication bypass, but a gated remote code execution.

There's some discussion on that here: https://bsky.app/profile/filippo.abyssdomain.expert/post/3kowjkx2njy2b

But it would be nice to have a diagram like OP's to understand how exactly it does the RCE and implements the SSH backdoor. If we understand how, maybe we can take measures to prevent similar exploits in the future.
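
Understanding the mechanism is one thing; in the meantime, a blunt stop-gap check is to flag the xz releases known to carry the backdoor (5.6.0 and 5.6.1). A minimal sketch, with the caveat that it only looks at the version reported by the xz CLI and your distro's advisory is the real source of truth:

  # Sketch: report whether the installed xz is one of the known-compromised
  # releases. Version strings only; check your distro's advisory for specifics.
  import re
  import subprocess

  COMPROMISED = {"5.6.0", "5.6.1"}

  def xz_version() -> str | None:
      try:
          out = subprocess.run(["xz", "--version"], capture_output=True,
                               text=True, check=True).stdout
      except (OSError, subprocess.CalledProcessError):
          return None
      match = re.search(r"(\d+\.\d+\.\d+)", out)
      return match.group(1) if match else None

  if __name__ == "__main__":
      version = xz_version()
      if version is None:
          print("xz not found")
      elif version in COMPROMISED:
          print(f"xz {version}: known-compromised release")
      else:
          print(f"xz {version}: not one of the known-bad releases")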

[-] underisk@lemmy.ml 27 points 8 months ago

I think ideas about prevention should be more concerned with the social engineering aspect of this attack. The code itself is certainly cleverly hidden, but any bad actor who gains the kind of access Jia did could likely pull off something similar without duplicating their specific method or technique.

[-] UnityDevice@startrek.website 97 points 8 months ago

If this was done by multiple people, I'm sure the person that designed this delivery mechanism is really annoyed with the person that made the sloppy payload, since that made it all get detected right away.

[-] fluxion@lemmy.world 33 points 8 months ago

I hope they are all extremely annoyed and frustrated

[-] acockworkorange@mander.xyz 24 points 8 months ago
[-] bobburger@fedia.io 21 points 8 months ago

I like to imagine this was thought up by some ambitious product manager who enthusiastically pitched this idea during their first week on the job.

Then they carefully and meticulously implemented their plan over 3 years, always promising the executives it would have a huge payoff. Then the product manager saw the writing on the wall that this project was gonna fail. Then they bailed while they could and got a better position at a different company.

The new product manager overseeing this project didn't care about it at all. New PM said fuck it and shipped the exploit before it was ready so the team could focus their work on a new project that would make new PM look good.

The new project will be ready in just 6-12 months, and it is totally going to disrupt the industry!

[-] nxdefiant@startrek.website 26 points 8 months ago* (last edited 8 months ago)

I see a dark room of shady, hoody-wearing, code-projected-on-their-faces, typing-on-two-keyboards-at-once 90's movie style hackers. The tables are littered with empty energy drink cans and empty pill bottles.

A man walks in. Smoking a thin cigarette, covered in tattoos and dressed in the flashiest interpretation of "Yakuza Gangster" imaginable, he grunts with disgust and mutters something in Japanese as he throws the cigarette to the floor, grinding it into the carpet with his thousand dollar shoes.

Flipping on the lights with an angry flourish, he yells at the room to gather for standup.

[-] refreeze@lemmy.world 80 points 8 months ago

I have been reading about this since the news broke and still can't fully wrap my head around how it works. What an impressive level of sophistication.

[-] rockSlayer@lemmy.world 80 points 8 months ago* (last edited 8 months ago)

And thanks to open source, it was still caught within a month. Nothing could convince me more of how secure FOSS can be than that.

[-] lung@lemmy.world 95 points 8 months ago

Idk if that's the right takeaway, more like 'oh shit there's probably many of these long con contributors out there, and we just happened to catch this one because it was a little sloppy due to the 0.5s thing'

This shit got merged. Binary blobs and hex digit replacements. Into low level code that many things use. Just imagine how often there's no oversight at all
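
A crude aid for that kind of oversight, sketched below with an assumed checkout layout: list every file in a source tree that looks like an opaque binary, so a reviewer at least knows which blobs exist and where.

  # Sketch: walk a source checkout and print files containing NUL bytes in their
  # first few KiB, a cheap heuristic for "opaque binary a human won't review".
  import pathlib
  import sys

  def looks_binary(path: pathlib.Path, sniff: int = 8192) -> bool:
      try:
          return b"\x00" in path.read_bytes()[:sniff]
      except OSError:
          return False

  if __name__ == "__main__":
      root = pathlib.Path(sys.argv[1] if len(sys.argv) > 1 else ".")
      for path in sorted(root.rglob("*")):
          if path.is_file() and ".git" not in path.parts and looks_binary(path):
              print(path)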

[-] rockSlayer@lemmy.world 49 points 8 months ago

Yes, and the moment this broke, other project maintainers started working on finding similar exploits. They read the same news we do and have the same concerns.

[-] lung@lemmy.world 22 points 8 months ago

Very generous to imagine that maintainers have so much time on their hands

[-] Quill7513@slrpnk.net 28 points 8 months ago

I was literally compiling this library a few nights ago and didn't catch shit. We caught this one, but I'm sure there are a bunch of "bugs" we've squashed over the years, long after they were introduced, that were working just as intended like this one.

The real scary thing to me is the notion this was state sponsored and how many things like this might be hanging out in proprietary software for years on end.

[-] uis@lemm.ee 69 points 8 months ago
[-] FatTony@lemm.ee 66 points 8 months ago
[-] alphafalcon@feddit.de 33 points 8 months ago

Coconut at least...

[-] JoeKrogan@lemmy.world 48 points 8 months ago

I think going forward we need to look at packages with a single maintainer, or only a few, as target candidates, especially ones as widespread as this one was (a rough heuristic is sketched below).

In addition, I think security needs to be a higher priority: no more patching fuzzers to allow that one program to compile. Fix the program.

I'd also love to see systems hardened by default.
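
A rough sketch of that single-maintainer heuristic, assuming a local clone and a one-year window; a real audit would obviously weigh far more than commit author counts:

  # Sketch: count distinct commit author emails in a dependency's recent history
  # as a crude bus-factor / "single maintainer" signal.
  import collections
  import subprocess
  import sys

  def author_counts(repo: str, since: str = "1 year ago") -> collections.Counter:
      out = subprocess.run(
          ["git", "-C", repo, "log", f"--since={since}", "--format=%ae"],
          capture_output=True, text=True, check=True,
      ).stdout
      return collections.Counter(line for line in out.splitlines() if line)

  if __name__ == "__main__":
      repo = sys.argv[1] if len(sys.argv) > 1 else "."
      counts = author_counts(repo)
      print(f"{len(counts)} distinct authors in the last year")
      for email, n in counts.most_common(5):
          print(f"{n:6d}  {email}")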

[-] Potatos_are_not_friends@lemmy.world 40 points 8 months ago* (last edited 8 months ago)

In the words of the devs in that security email, and I'm paraphrasing -

"Lots of people giving next steps, not a lot people lending a hand."

I say this as a person not lending a hand. This stuff is over my head and outside my industry knowledge and experience, even after I spent the whole weekend piecing everything together.

[-] amju_wolf@pawb.social 31 points 8 months ago

Packages or dependencies with only one maintainer that are this popular have always been an issue, and not just a security one.

What happens when that person can't afford to or doesn't want to run the project anymore? What if they become malicious? What if they sell out? Etc.

[-] girlfreddy@lemmy.ca 46 points 8 months ago

A small blurb from The Guardian on why Andres Freund went looking in the first place.

So how was it spotted? A single Microsoft developer was annoyed that a system was running slowly. That’s it. The developer, Andres Freund, was trying to uncover why a system running a beta version of Debian, a Linux distribution, was lagging when making encrypted connections. That lag was all of half a second, for logins. That’s it: before, it took Freund 0.3s to login, and after, it took 0.8s. That annoyance was enough to cause him to break out the metaphorical spanner and pull his system apart to find the cause of the problem.

[-] Pantherina@feddit.de 37 points 8 months ago
[-] index@sh.itjust.works 32 points 8 months ago

Give this guy a medal and a mastodon account

[-] noddy@beehaw.org 31 points 8 months ago

The scary thing about this is thinking about potential undetected backdoors similar to this existing in the wild. Hopefully the lessons learned from the xz backdoor will help us to prevent similar backdoors in the future.

[-] KillingTimeItself@lemmy.dbzer0.com 26 points 8 months ago

This was one hell of an April Fools joke, I tell you what.

[-] luthis@lemmy.nz 21 points 8 months ago

I have heard multiple times from different sources that building from git source instead of using tarballs invalidates this exploit, but I do not understand how. Is anyone able to explain that?

If malicious code is in the source, and therefore in the tarball, what's the difference?

[-] Aatube@kbin.melroy.org 47 points 8 months ago

Because m4/build-to-host.m4, the entry point, is not in the git repo; it was added to the release tarballs by the malicious maintainer.
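
A minimal sketch of checking that property yourself, assuming you have the release tarball and a clone of the repo at the matching tag (generated autotools files will legitimately show up in the diff; a file like m4/build-to-host.m4 that exists nowhere in git is the red flag):

  # Sketch: list files present in a release tarball but not tracked at the
  # corresponding git tag. Paths and tag name are assumptions.
  import subprocess
  import sys
  import tarfile

  def tarball_members(tar_path: str) -> set[str]:
      with tarfile.open(tar_path) as tar:
          # Drop the leading "xz-x.y.z/" component of each member name.
          return {name.split("/", 1)[1] for name in tar.getnames() if "/" in name}

  def git_tracked(repo: str, tag: str) -> set[str]:
      out = subprocess.run(
          ["git", "-C", repo, "ls-tree", "-r", "--name-only", tag],
          capture_output=True, text=True, check=True,
      ).stdout
      return set(out.splitlines())

  if __name__ == "__main__":
      tar_path, repo, tag = sys.argv[1:4]
      for name in sorted(tarball_members(tar_path) - git_tracked(repo, tag)):
          print(name)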
