enumerator4829

joined 5 months ago
[–] enumerator4829@sh.itjust.works 16 points 1 month ago (2 children)

VSCode is just Emacs with a weirder Lisp. (/s)

(You can tear my Emacs from my cold dead hands)

I also hate that warning, but it’s basically “Can’t fit your text, with the font and properties you specified, into the box you specified, without making it look like ass.”

The easiest way to preserve the formatting is to reword the text. Then again, it would be nice if it didn’t happen all the time in my normal paragraphs as soon as I use a word with more than 10 characters…
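Assuming this is LaTeX’s “Overfull \hbox” warning (which is what it sounds like), here’s a minimal sketch of a typical trigger and the usual mitigations besides rewording. The path in the example is made up; pdfLaTeX defaults assumed.

```latex
% "Overfull \hbox (...pt too wide)" means TeX could not break the
% paragraph into lines of width \hsize within its badness tolerance,
% so one line sticks out into the margin.
\documentclass{article}

% Mitigations besides rewording:
\emergencystretch=1em   % allow extra interword stretch on a final pass
\hyphenation{Lem-my}    % teach TeX break points for words it can't hyphenate
% \sloppy               % crude: raises \tolerance document-wide

\begin{document}
% A narrow box plus a long unbreakable token is a classic trigger,
% since \texttt text is not hyphenated by default:
\noindent\parbox{2.5cm}{See \texttt{/usr/share/doc/somepackagename}.}
\end{document}
```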

[–] enumerator4829@sh.itjust.works 2 points 1 month ago (2 children)

Reword your text to fit.

[–] enumerator4829@sh.itjust.works 3 points 1 month ago* (last edited 1 month ago)

I’m quite fucking good at Linux. I’m fine with embracing open source, and I think Proton is the best thing ever.

I drew the line at audio, video and graphics on Linux, especially anything realtime.

I bought a MacBook for that. I feel dirty, but all my “work” is done on remote Linux systems anyway, so my Mac just needs to provide an editor and a terminal emulator, and I can even make do with my editor over SSH given reasonable latencies. On the other hand, all my audio/video/graphics work flawlessly on macOS, and that’s what I need locally.

[–] enumerator4829@sh.itjust.works 1 points 2 months ago

The thing is, Wayland does kind of prevent it, by forcing the GPU into the rendering pipeline far harder than Xorg does. The GPU assumptions throughout the code base(s) make latency shoot through the roof when running software-rendered. If you want decent latency, you need a GPU, and if you want to run multiuser you are going to pay Nvidia a shitton of money.

I can also imagine it’s hard (impossible?) to do performant damage tracking in a VNC server without implementing at least parts of the VNC server inside the compositor. This means that the compositor and the VNC server get tightly coupled by necessity. Choice will be limited: would you like the bad DE with the good VNC server, or the good DE with the bad VNC server? Bad damage tracking means shit latency and high bandwidth usage, or other tradeoffs. So even if someone managed to implement what I want on Wayland, it would most likely be limited to a single compositor, not a general solution allowing a free choice of compositor.
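To illustrate what compositor-assisted damage tracking buys a remote-desktop server, here’s a minimal, hypothetical sketch (the `Rect`/`coalesce` names are mine, not any real compositor or VNC API): with damage info, the server only re-encodes the rectangles reported as changed instead of diffing whole frames.

```python
# Hypothetical sketch of per-frame damage coalescing, as a VNC server
# might do it if the compositor reports which rectangles changed.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x: int
    y: int
    w: int
    h: int

def union(a: Rect, b: Rect) -> Rect:
    """Smallest rectangle covering both damage regions."""
    x1, y1 = min(a.x, b.x), min(a.y, b.y)
    x2 = max(a.x + a.w, b.x + b.w)
    y2 = max(a.y + a.h, b.y + b.h)
    return Rect(x1, y1, x2 - x1, y2 - y1)

def coalesce(damage: list[Rect]) -> Rect:
    """Coalesce one frame's damage into a single update rect to encode."""
    acc = damage[0]
    for r in damage[1:]:
        acc = union(acc, r)
    return acc

# A cursor blink plus a small terminal redraw: well under 1% of a
# 1920x1080 frame actually needs re-encoding.
frame_damage = [Rect(10, 10, 16, 32), Rect(100, 200, 80, 24)]
print(coalesce(frame_damage))
```

Without that information crossing the compositor boundary, the server is stuck screen-scraping and diffing full frames, which is exactly the latency/bandwidth tradeoff described above.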

The best software suite I know of for it is Cendio ThinLinc, built on top of TigerVNC. Free for up to 5 users. There are some others in the same niche. My recommendation would be to try ThinLinc on Rocky 9 or Ubuntu 24, configured to use XFCE. MATE, KDE and Cinnamon all work fine too. Turn off compositing! Over a good WAN link it feels mostly local unless playing fullscreen video. On a LAN link, the only thing giving it away is extra tearing and compression artifacts when playing YouTube videos fullscreen. Compared to many other solutions I have tried, the latency and “immersion” are incredible.
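As a hedged illustration of the “turn off compositing” step: the ThinLinc profile wiring varies by version, but the XFCE side is standard and not ThinLinc-specific. Session config fragment, to be run inside the remote session:

```shell
# Disable XFCE's compositor for the remote session (per user):
xfconf-query -c xfwm4 -p /general/use_compositing -s false

# Or launch the window manager with compositing off from session startup:
xfwm4 --compositor=off --replace &
```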

As for me, I’ll try to never manage linux desktop fleets or remote desktops again.

[–] enumerator4829@sh.itjust.works 3 points 2 months ago (2 children)

What I’ve seen of RustDesk so far is that it’s absolutely not even close to the options available for X. It replaces TeamViewer, not thin clients.

You would need the following to get viability in my eyes:

  • Multiple users per server (~50 users)
  • Enterprise SSO authentication, working kerberos on desktop
  • Good and easily deployable native clients for Windows, Linux and Mac, plus an HTML5 client
  • Performant headless software rendered desktops
  • GPU acceleration possible but not required
  • Clustering, HA control plane, load balancing
  • Configuration management available

This isn’t even an edge case. Current and upcoming regulations on information security drag the entire industry this way. Medical, research, defence, banking: basically every regulated landscape gets easier to work in when going down this route, with close to zero worries about endpoint security. Microsoft is working hard on this. It’s easy to do with X. And the best thing on Wayland is RustDesk? As stated earlier, these issues were brought up and discarded as FUD in 2008, and here we are.

Wayland isn’t a better replacement, after 15 years it’s still not a replacement. The Wayland implementations certainly haven’t been rushed, but the architecture was. At this point, fucking Arcan will be viable before Wayland.

[–] enumerator4829@sh.itjust.works 3 points 2 months ago (2 children)

Exactly my point. The issues people consider “solved” with Wayland today will be solved in production in 3-5 years.

People are still running RHEL 7, and Wayland in RHEL 9 isn’t that polished. In 4-5 years, when RHEL 10 lands, it might start to be usable. Oh right, then we need another few years for vendors to port garbage software that’s absolutely mission critical, barely works on Xorg, and sure as fuck won’t work in XWayland. I’m betting several large RHEL clients will either remain on RHEL 8 far past EOL or just switch to alternative distros.

Basically, Xorg might be dead, but in some (paying commercial) contexts, Wayland won’t be a viable option within the next 5-10 years.

[–] enumerator4829@sh.itjust.works -3 points 2 months ago

Yeah, the few thousand users I managed desktops for will remain on X for the next few years, last I heard from my old colleagues.

Because of my points above.

But it’s good that your laptop works now, and that I can help my grandma over TeamViewer again.

[–] enumerator4829@sh.itjust.works 7 points 2 months ago

Please note that the nominal FLOP/s from both Nvidia and Huawei are kinda bullshit. The precision we run at greatly affects that number. Nvidia’s marketing nowadays refers to fp4 tensor operations. Traditionally, FLOP/s are measured with fp64 matrix-matrix multiplication. That’s a lot more bits per FLOP.

Also, that GPU-GPU bandwidth is kinda shit compared to Nvidia’s marketing numbers, if I’m parsing correctly (NVLink is 18x 10 GB/s links per GPU, big ’B’ as in GB). I might be reading the numbers incorrectly, but anyway. How and if they manage multi-GPU cache coherency will be interesting to see. Nvidia and AMD both have (to varying degrees) cache coherency in those settings. Developer experience matters…
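To make the bits-per-FLOP point concrete, a back-of-the-envelope sketch; the 20 PFLOP/s figure is made up for illustration, and the link arithmetic just reuses the 18x 10 GB/s reading above.

```python
# Illustrative only: why headline FLOP/s aren't comparable across
# precisions, plus the GPU-GPU link arithmetic from the comment above.

bits_per_operand = {"fp64": 64, "fp4": 4}

# An accelerator advertising, say, 20 PFLOP/s of fp4 tensor math
# (hypothetical number) moves 16x fewer bits per operand than an
# fp64 matrix-matrix benchmark like HPL would.
ratio = bits_per_operand["fp64"] // bits_per_operand["fp4"]
print(f"fp64 carries {ratio}x the bits per operand of fp4")  # 16x

# Aggregate GPU-GPU bandwidth, using the figures as parsed above:
links, gb_per_s_per_link = 18, 10
print(f"aggregate: {links * gb_per_s_per_link} GB/s per GPU")  # 180 GB/s
```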

Now, the really interesting things are power draw, density and price. Power draw and price obviously influence TCO. On 7nm, I’d guess the power bill won’t be very fun to read, but that’s just a guess. Density influences the network options: are DAC cables viable at all, or is it (more expensive) optical all the way?

[–] enumerator4829@sh.itjust.works 6 points 2 months ago

There is actually less to ’xkill’. It nukes the X window from orbit in a very violent manner. The owning process(-tree) will usually just instantly curl up and die.

The main benefit is that it doesn’t actually kill the process directly, it only nukes the window. As such, you can get rid of windows belonging to otherwise unkillable processes (zombies, etc.).

Also, it’s fun. Just don’t miss the window and accidentally kill your WM. (Beat that, Wayland.)

[–] enumerator4829@sh.itjust.works 14 points 2 months ago

Tony Stark was able to build his CA in a cave! With a bunch of dice!

[–] enumerator4829@sh.itjust.works 69 points 2 months ago (9 children)

This will be so much fun for people with legacy systems
