Order (programming.dev)
[-] dan@lemm.ee 34 points 1 year ago* (last edited 1 year ago)

Lossless compression algorithms aren’t magical; they can’t make every input smaller. If they could, two different inputs would have to compress to the same output (pigeonhole principle), and decompression would be ambiguous. So every lossless algorithm makes some data larger and some data smaller; the trick is that the data it shrinks matches the patterns common in real files. Given truly random data, basically every lossless compression algorithm will make it slightly larger.
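A minimal sketch of that point, assuming Python's standard `zlib` module (the specific library is just an illustration): repetitive input shrinks dramatically, while the same number of random bytes comes out slightly larger.

```python
import os
import zlib

repetitive = b"hello world " * 1000          # highly patterned input, 12,000 bytes
random_bytes = os.urandom(len(repetitive))   # effectively incompressible

print(len(zlib.compress(repetitive)))    # far smaller than 12,000
print(len(zlib.compress(random_bytes)))  # slightly larger than 12,000
```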

A good encryption algorithm will output data that’s effectively indistinguishable from randomness. It’s not the only consideration, but often the more random the output looks, the better the algorithm.

Put those two facts together and it’s pretty easy to see why you should compress first, then encrypt: once data is encrypted, it looks random, so there’s nothing left for the compressor to squeeze.
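A rough illustration of why the order matters, assuming the third-party `cryptography` package (Fernet) is available; the library choice is an assumption for the sketch, not something anyone in the thread used.

```python
import zlib
from cryptography.fernet import Fernet  # assumed third-party dependency

fernet = Fernet(Fernet.generate_key())
data = b"hello world " * 1000

compress_then_encrypt = fernet.encrypt(zlib.compress(data))
encrypt_then_compress = zlib.compress(fernet.encrypt(data))

# The first stays small; the second barely shrinks, because the ciphertext
# (apart from its base64 wrapper) has no patterns left for zlib to exploit.
print(len(compress_then_encrypt), len(encrypt_then_compress))
```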

[-] bastian_5@sh.itjust.works 7 points 1 year ago

And the fact that it can grow data means you should really check that the compressed data is actually smaller before using it (something like the sketch below). I once had a site refuse to let me upload a file that was well below its 8 MB limit while claiming it was above the limit. I'm assuming they were checking the size after compression, and the file grew from 6 MB to above the limit.
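A hypothetical guard along those lines, sketched with Python's `zlib`; the function name is made up for illustration.

```python
import zlib

def maybe_compress(data: bytes) -> tuple[bytes, bool]:
    """Return (payload, was_compressed); keep the original if compression grows it."""
    compressed = zlib.compress(data)
    if len(compressed) < len(data):
        return compressed, True   # store/send the compressed form
    return data, False            # compression grew the data; keep the original
```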
