I need to scan very large JSONL files efficiently and am considering a parallel grep-style approach over line-delimited text.

Would love to hear how you would design it.

[–] Ephera@lemmy.ml 2 points 1 day ago (1 children)

I think you could open the same file multiple times and then just skip ahead by some number of bytes before you start reading.

But yeah, no idea if this would actually be efficient. The bottleneck is likely still the hard drive, and trying to fit multiple sections of the file into RAM might end up being worse than reading linearly...
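
For illustration, a minimal sketch of this open-several-handles-and-seek idea in Go (the file name, search pattern, and worker count are placeholders, not anything from the thread). Each worker opens its own handle, seeks to its byte range, realigns to a line boundary, and greps every line that starts inside its range:

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"log"
	"os"
	"strings"
	"sync"
)

// scanRange greps for needle in every line that *starts* inside [start, end).
// A line crossing the end boundary is finished here; the next worker skips it.
func scanRange(path, needle string, start, end int64, wg *sync.WaitGroup) {
	defer wg.Done()

	f, err := os.Open(path) // each worker gets its own file handle
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	pos := start
	if start > 0 {
		// Back up one byte so a chunk boundary that lands exactly on a line
		// start is not skipped: discarding through the first '\n' from start-1
		// leaves us at the first line that begins inside this range.
		pos = start - 1
	}
	if _, err := f.Seek(pos, io.SeekStart); err != nil {
		log.Fatal(err)
	}

	r := bufio.NewReader(f)
	if start > 0 {
		skipped, err := r.ReadString('\n')
		pos += int64(len(skipped))
		if err != nil {
			return // this chunk lies entirely inside the file's final line
		}
	}

	for pos < end {
		line, err := r.ReadString('\n')
		if strings.Contains(line, needle) {
			fmt.Print(line)
		}
		pos += int64(len(line))
		if err != nil {
			break // io.EOF or a real error; either way, stop
		}
	}
}

func main() {
	path := "data.jsonl" // hypothetical input file
	needle := `"error"`  // hypothetical pattern to grep for

	info, err := os.Stat(path)
	if err != nil {
		log.Fatal(err)
	}

	const workers = 4
	chunk := info.Size() / workers

	var wg sync.WaitGroup
	for i := int64(0); i < workers; i++ {
		start := i * chunk
		end := start + chunk
		if i == workers-1 {
			end = info.Size() // last worker takes the remainder
		}
		wg.Add(1)
		go scanRange(path, needle, start, end, &wg)
	}
	wg.Wait()
}
```

Whether the extra handles help or hurt depends on the storage: on an NVMe SSD the parallel reads can pay off, while on a spinning disk the seeking will likely make it slower than one linear pass.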

[–] Bazell@lemmy.zip 2 points 1 day ago* (last edited 1 day ago)

This approach will indeed hit a bottleneck even on an SSD. Instead, the file can be read into RAM line by line in thread 0; once a specified number of lines has been gathered, schedule thread 1 to process them while thread 0 keeps reading new lines. When another chunk is ready, hand it to thread 2, and so on. This way you start processing the data asynchronously as quickly as possible. A slightly slower but more convenient approach is to read the whole file into RAM first and only then assign parts of it to each thread. A sketch of the first variant is below.
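
A rough sketch of that single-reader, chunked-worker pipeline, again in Go (arbitrary choice); the file name, batch size, worker count, and the string match standing in for the per-line work are all made-up assumptions:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
	"sync"
)

const (
	batchSize = 10000 // lines per chunk handed to a worker
	workers   = 4
)

func main() {
	f, err := os.Open("data.jsonl") // hypothetical input file
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	chunks := make(chan []string, workers) // small buffer so the reader stays ahead

	// Worker pool: each worker drains chunks from the channel as they arrive.
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for chunk := range chunks {
				for _, line := range chunk {
					if strings.Contains(line, `"error"`) { // stand-in for real per-line work
						fmt.Println(line)
					}
				}
			}
		}()
	}

	// Reader: the single goroutine that touches the disk.
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long JSONL lines
	batch := make([]string, 0, batchSize)
	for sc.Scan() {
		batch = append(batch, sc.Text())
		if len(batch) == batchSize {
			chunks <- batch
			batch = make([]string, 0, batchSize)
		}
	}
	if len(batch) > 0 {
		chunks <- batch // flush the final partial chunk
	}
	close(chunks) // no more chunks; workers exit once the channel is drained
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	wg.Wait()
}
```

The small channel buffer keeps the reader slightly ahead of the workers without letting it run the whole file into memory, which is the main difference from the read-everything-first variant.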