[-] orangeboats@lemmy.world 4 points 12 hours ago* (last edited 12 hours ago)

One of the issues at hand is that X11, the predecessor of Wayland, does not have a standardized way to tell applications what scale they should use. Applications on X11 get the scale from environment variables (completely bypassing X11), or from Xft.dpi, or by providing in-application settings, or they guess it using some unorthodox means, or simply don't scale at all. It's a huge mess overall.
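To make the fragmentation concrete, here is a sketch (in Rust, purely illustrative, the function and fallback order are made up) of the kind of guessing chain an X11 toolkit ends up with. GDK_SCALE and QT_SCALE_FACTOR are real environment variables; everything else here is an assumption:

```rust
// Illustrative fallback chain: env vars first, then give up and don't scale.
// A real toolkit would also consult Xft.dpi from the X resource database.
fn guess_scale(lookup: impl Fn(&str) -> Option<String>) -> f64 {
    lookup("GDK_SCALE")
        .or_else(|| lookup("QT_SCALE_FACTOR"))
        .and_then(|s| s.parse().ok())
        .unwrap_or(1.0) // nothing set: 1 pixel is 1 pixel, no scaling at all
}

fn main() {
    // Simulate an environment where only QT_SCALE_FACTOR is set.
    let env = |k: &str| (k == "QT_SCALE_FACTOR").then(|| "2".to_string());
    println!("{}", guess_scale(env)); // prints "2"
}
```

Every toolkit does some variant of this independently, which is exactly why two X11 apps on the same desktop can disagree about the scale.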

It is one of the more-or-less fundamentally unfixable parts of the protocol, since X11 wants everything to share the same coordinate space (i.e. 1 pixel is 1 pixel everywhere), which is quite unsuitable for modern mixed-DPI systems.

Wayland does operate the way you describe, and applications supporting Wayland will work properly in HiDPI environments.

However, a lot of people and applications are still on X11 for various reasons.

[-] orangeboats@lemmy.world 1 points 12 hours ago

LoDPI applications are either rendered tiny or upscaled (= blurry), aren't they?

[-] orangeboats@lemmy.world 1 points 19 hours ago

Yeah, I get the display server part. What I meant was that 200% scaling gets you a 1920x1080 logical resolution for HiDPI applications -- LoDPI applications remain blurry, just as if you had set your actual resolution to 1080p, but HiDPI applications will enjoy the enhanced visual acuity.
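The arithmetic behind that, as a quick sketch (function name is mine, just illustrating the relationship):

```rust
// Logical resolution = physical resolution divided by the scale factor.
fn logical(physical: (u32, u32), scale_percent: u32) -> (u32, u32) {
    (
        physical.0 * 100 / scale_percent,
        physical.1 * 100 / scale_percent,
    )
}

fn main() {
    // A 4K panel at 200% scaling exposes a 1920x1080 logical resolution.
    assert_eq!(logical((3840, 2160), 200), (1920, 1080));
}
```

So apps see 1080p worth of layout space, but HiDPI-aware ones still render at the full 3840x2160.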

Even on smaller screens like the 14" ones, the quality of very high resolution (e.g. 4K) is still quite visible IMO, especially when it comes to text rendering. But it could very well just be my eyes.

[-] orangeboats@lemmy.world 10 points 1 day ago

It's not even Linux's fault. Plenty of apps support HiDPI on Linux.

It's the developers who still think LoDPI-only support is acceptable when it's already 2024.

[-] orangeboats@lemmy.world 2 points 1 day ago

Isn't scaling to 200% the same as lowering the resolution to half? And you lose the high DPI for apps that support it too.

[-] orangeboats@lemmy.world 38 points 1 day ago

Agreed. HiDPI is the way to go, and we should appreciate Framework for putting that in their laptops instead of continuing to use shitty 1366x768 screens.

Xorg is the reason OP is facing the scaling issues. OP, try forcing the apps to run on native Wayland if they support it but don't default to it. The Wayland page on the Arch wiki has instructions on that. It immensely improved my HiDPI experience.

[-] orangeboats@lemmy.world 3 points 6 days ago* (last edited 6 days ago)

...Why is there Dunkin Donuts inside a hospital?

[-] orangeboats@lemmy.world -2 points 6 days ago

Agencies that are still living in the 90s...

[-] orangeboats@lemmy.world 49 points 3 months ago

Entitled brat? What... Have you ever seen how GNOME developers respond to some bug reports and merge requests?

Since when has reporting bugs and contributing to the project become an entitlement?

[-] orangeboats@lemmy.world 84 points 4 months ago

Society in general encourages and rewards those who speak more, even if what they say contributes nothing or is absolute nonsense.

[-] orangeboats@lemmy.world 50 points 4 months ago* (last edited 4 months ago)

Not sure if it's still the case today, but back then cellular ISPs could tell you were tethering by looking at the TTL (time to live) value of your packets.

Basically, a packet usually starts with a TTL of 64. After each hop (e.g. from your phone to the ISP's equipment) the TTL is decremented, becoming 63, then 62, and so on. The main purpose of TTL is to prevent packets from lingering in the network forever, by dropping any packet whose TTL reaches zero. Most packets reach their destinations within 20 hops anyway, so a TTL of 64 is plenty.

Back to the topic. What happens when the ISP receives a packet with a TTL lower than expected, say 61 instead of 62? It realizes that your packet must have gone through an additional hop (for example, from your laptop to your phone), hence the traffic must be tethered.
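The heuristic above boils down to a couple of lines. A toy sketch (the constants and function names are mine, not any real ISP's code):

```rust
// A common initial TTL; Linux defaults to 64.
const PHONE_INITIAL_TTL: u8 = 64;

// Each hop on the path decrements the TTL by one.
fn ttl_after_hops(initial: u8, hops: u8) -> u8 {
    initial.saturating_sub(hops)
}

// The ISP expects traffic exactly one hop away from the phone; anything
// lower suggests an extra device (a tethered laptop) behind the phone.
fn looks_tethered(ttl_seen_by_isp: u8) -> bool {
    ttl_seen_by_isp < PHONE_INITIAL_TTL - 1
}

fn main() {
    let direct = ttl_after_hops(PHONE_INITIAL_TTL, 1); // phone -> ISP
    let tethered = ttl_after_hops(PHONE_INITIAL_TTL, 2); // laptop -> phone -> ISP
    assert!(!looks_tethered(direct));
    assert!(looks_tethered(tethered));
}
```

Which is also why the classic workaround was to pin the tethered device's initial TTL to 65, so it arrives at the ISP looking like phone traffic.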

[-] orangeboats@lemmy.world 86 points 11 months ago

It's the fear of centralization, I believe (correct me if I'm wrong!).

Seeing that the whole point of federation is to decentralize the web, putting everything under the Cloudflare umbrella goes against this philosophy.

12
submitted 1 year ago* (last edited 1 year ago) by orangeboats@lemmy.world to c/rust@programming.dev

For context: I am trying to write a Rust wrapper over a C library.

Like many C libraries, most of its functions return an int. Positive return values are meaningful (provides information) and negative values are error codes.

To give an example, think of something like int get_items_from_record(const struct record *rec, struct item *items). A positive value indicates how many items were returned. -1 could mean ErrorA, -2 ErrorB, and so on.

Since this is Rust, I want to represent this kind of integer as Result<T, E>, e.g.:

enum LibError {
    A = -1,
    B = -2,
    // ....
}

// LibResult is ideally just represented as an integer.
type LibResult = Result<NonNegativeInteger, LibError>;

// Then I can pass LibResult values back to the C code as i32 trivially.

Is there a way/crate to do this?
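To be concrete, the hand-rolled version of what I'm after would look roughly like this (the error variants and function name are made up for illustration):

```rust
#[derive(Debug, PartialEq)]
enum LibError {
    A,            // -1
    B,            // -2
    Unknown(i32), // any other negative code
}

// Map the C-style return code onto a Result at the FFI boundary.
fn from_c_ret(ret: i32) -> Result<u32, LibError> {
    match ret {
        n if n >= 0 => Ok(n as u32),
        -1 => Err(LibError::A),
        -2 => Err(LibError::B),
        n => Err(LibError::Unknown(n)),
    }
}

fn main() {
    assert_eq!(from_c_ret(3), Ok(3));
    assert_eq!(from_c_ret(-2), Err(LibError::B));
}
```

I'm hoping a crate can generate this boilerplate (and the reverse mapping back to i32) for me.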

