Can a file really be split efficiently? And is reading from multiple files on the same disk really faster than scanning a single file from top to bottom?
You don't actually need to "split" anything; you just read from different offsets per thread. mmap might be the most efficient way to do this (or at least the easiest).
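Rough sketch of the per-offset idea (POSIX, untested; the file name, thread count, and the mid-line boundary handling are all placeholders):

```cpp
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>
#include <thread>
#include <vector>

int main() {
    int fd = open("data.txt", O_RDONLY);   // hypothetical input file
    struct stat st;
    fstat(fd, &st);

    const int n = 4;
    std::vector<std::thread> workers;
    for (int i = 0; i < n; ++i)
        workers.emplace_back([=] {
            off_t begin = st.st_size * i / n;
            off_t end = st.st_size * (i + 1) / n;
            std::vector<char> buf(end - begin);
            // pread takes an explicit offset, so the threads never race
            // on a shared file position.
            pread(fd, buf.data(), buf.size(), begin);
            // ... parse buf; note a chunk may start and end mid-line ...
        });
    for (auto& t : workers) t.join();
    close(fd);
}
```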
Whether or not that's going to run into hardware bottlenecks is a separate issue from designing a parallel algorithm. I don't know what OP is trying to accomplish, but if their hardware is known (e.g. this is an internal tool meant to run in a data center), they'll need to read up on their hardware and virtualization architecture to squeeze out the most I/O performance.
But if parsing is actually the bottleneck, there's a lot you can do to optimize it in software. Simdjson would be a good place to start.
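Roughly, the simdjson On-Demand quickstart looks like this (file and field names are placeholders; check the project docs for the current API):

```cpp
#include "simdjson.h"

int main() {
    simdjson::ondemand::parser parser;
    // load() reads the whole file with the extra padding simdjson requires.
    simdjson::padded_string json = simdjson::padded_string::load("data.json"); // hypothetical file
    simdjson::ondemand::document doc = parser.iterate(json);
    // On-Demand parsing is lazy: fields are only decoded when accessed, e.g.
    // double v = doc["some_field"];   // field name is a placeholder
    (void)doc;
}
```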
I think you could open the same file multiple times and then just skip ahead by some number of bytes before you start reading.
But yeah, no idea if this would actually be efficient. The bottleneck is likely still the hard drive, and trying to fit multiple sections of the file into RAM might end up being worse than reading linearly...
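For what it's worth, a sketch of that seek-ahead idea (untested; the boundary realignment is the fiddly part, since an offset will usually land mid-line):

```cpp
#include <fstream>
#include <string>
#include <thread>
#include <vector>

void scan_region(const std::string& path, std::streamoff begin, std::streamoff end) {
    std::ifstream in(path, std::ios::binary);
    std::string line;
    if (begin > 0) {
        // Back up one byte and discard through the next '\n', so every
        // line is owned by exactly one thread.
        in.seekg(begin - 1);
        std::getline(in, line);
    }
    while (in.tellg() < end && std::getline(in, line)) {
        // ... process line ...
    }
}

int main() {
    const std::string path = "data.txt";   // hypothetical input file
    std::ifstream probe(path, std::ios::binary | std::ios::ate);
    std::streamoff size = probe.tellg();
    const int n = 4;

    std::vector<std::thread> workers;
    for (int i = 0; i < n; ++i)
        workers.emplace_back(scan_region, path, size * i / n, size * (i + 1) / n);
    for (auto& t : workers) t.join();
}
```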
This approach will indeed hit a bottleneck, even on an SSD. Instead, the file can be read into RAM line by line in thread 0; once a specified number of lines has been gathered, schedule thread 1 to process them while thread 0 keeps reading new lines. Once another chunk is ready, hand it to thread 2, and so on. This way you can start processing the data asynchronously as early as possible. A slightly slower but more convenient approach is to first read the whole file into RAM and only then assign parts of it to each thread at the same time.
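A minimal sketch of that pipeline (the queue, chunk size, and the processing step are all placeholders):

```cpp
#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

std::queue<std::vector<std::string>> chunks;
std::mutex m;
std::condition_variable cv;
bool done = false;

void worker() {
    for (;;) {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [] { return !chunks.empty() || done; });
        if (chunks.empty()) return;          // producer finished, queue drained
        auto chunk = std::move(chunks.front());
        chunks.pop();
        lk.unlock();                         // process outside the lock
        for (auto& line : chunk) { /* ... process line ... */ }
    }
}

int main() {
    std::vector<std::thread> workers;
    for (int i = 0; i < 3; ++i) workers.emplace_back(worker);

    std::ifstream in("data.txt");            // hypothetical input file
    std::vector<std::string> batch;
    std::string line;
    while (std::getline(in, line)) {
        batch.push_back(std::move(line));
        if (batch.size() == 10000) {         // the "specified amount of lines"
            { std::lock_guard<std::mutex> lk(m); chunks.push(std::move(batch)); }
            cv.notify_one();
            batch.clear();
        }
    }
    { std::lock_guard<std::mutex> lk(m);
      if (!batch.empty()) chunks.push(std::move(batch));
      done = true; }
    cv.notify_all();
    for (auto& t : workers) t.join();
}
```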
If the task is just to read the data quickly without processing it (calculations, sorting, transformations, etc.), then yes, reading line by line is the fastest way. But the OP mentioned some processing of the data, which may require additional time and computing power, so it can be more efficient to first load the file into RAM, splitting it into chunks, give each thread a chunk to process, and then combine the results.
In fact, my first comment suggested that you can read the file line by line, and once enough lines have been read into RAM, thread 1 can start processing them while thread 0 keeps reading new lines from the drive. Once another chunk is ready, thread 2 can start processing it, and so on.
In conclusion, it all depends on what exactly you need to do with the data. Simply transferring it from the drive to RAM is best done by reading linearly, line by line. But the processing of the data can be split among CPU cores to maximize the speed of the computations.
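Here is a sketch of the read-everything-first variant (untested; the cut points are moved forward to newline boundaries so no line is split across two chunks):

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

int main() {
    std::ifstream in("data.txt", std::ios::binary);   // hypothetical input file
    std::ostringstream ss;
    ss << in.rdbuf();                                 // slurp the whole file
    const std::string data = ss.str();

    // Choose n cut points, then advance each one to the next '\n'.
    const int n = 4;
    std::vector<size_t> cut{0};
    for (int i = 1; i < n; ++i) {
        size_t pos = data.find('\n', data.size() * i / n);
        cut.push_back(pos == std::string::npos ? data.size() : pos + 1);
    }
    cut.push_back(data.size());

    std::vector<std::thread> workers;
    for (int i = 0; i < n; ++i)
        workers.emplace_back([&data, begin = cut[i], end = cut[i + 1]] {
            size_t lines = 0;                         // example work: count lines
            for (size_t p = begin; p < end; ++p)
                lines += (data[p] == '\n');
            (void)lines;
        });
    for (auto& t : workers) t.join();
}
```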
Don't dedicate a thread to line-by-line file reads just to get the data into memory. There is already a piece of software optimized for tasks like this: the OS.
Just mmap your file and start processing.
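Bare-bones version of that (POSIX, error handling omitted; the file name and the line-counting work are placeholders):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <thread>
#include <vector>

int main() {
    int fd = open("data.txt", O_RDONLY);   // hypothetical input file
    struct stat st;
    fstat(fd, &st);
    char* data = static_cast<char*>(
        mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0));
    // Optional hint: we intend to read the mapping sequentially.
    madvise(data, st.st_size, MADV_SEQUENTIAL);

    const int n = 4;
    std::vector<std::thread> workers;
    for (int i = 0; i < n; ++i)
        workers.emplace_back([=] {
            size_t begin = st.st_size * (size_t)i / n;
            size_t end = st.st_size * (size_t)(i + 1) / n;
            size_t newlines = 0;               // example work: count lines
            for (size_t p = begin; p < end; ++p)
                newlines += (data[p] == '\n'); // the OS faults pages in as needed
            (void)newlines;
        });
    for (auto& t : workers) t.join();

    munmap(data, st.st_size);
    close(fd);
}
```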
That depends on the programming language and the built-in methods being used. I described in a more fundamental way how it may work, assuming that the OS itself will eventually use at least one thread to read the file. From my point of view, that is our main thread: it runs a loop that reads the file line by line and hands ready chunks to other threads to process. As I described in another comment here, we can simplify this pipeline by first reading the whole file into RAM, splitting it into pieces, and only then processing them in parallel. I agree that the second approach is more convenient and easier to implement.