I'm also not sure if this counts as obscure, but Bloom filters! It's a structure you can add elements to, then ask whether it has seen an element before, with the answer being either "no" or "probably yes". There's a trade-off between the confidence of a "probably yes", how many elements you expect to add, and how big the Bloom filter is, but it's very space- and time-efficient. And it uses hash functions, which always make for a fun time.
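Here's a minimal toy sketch of the mechanics in C++ (parameters made up, not tuned, just to show the idea):

```cpp
// Toy Bloom filter: k hash functions set/check k bits in a bit array.
// A "false" answer means definitely never added; "true" means probably added.
#include <bitset>
#include <cstddef>
#include <functional>
#include <string>

struct BloomFilter {
    static constexpr std::size_t kBits = 1 << 16;  // filter size (tune for expected load)
    static constexpr int kHashes = 4;              // number of hash functions
    std::bitset<kBits> bits;

    // Cheap way to derive k different hashes from one (fine for a demo).
    static std::size_t hash(const std::string& s, int i) {
        return std::hash<std::string>{}(s + char('A' + i)) % kBits;
    }
    void add(const std::string& s) {
        for (int i = 0; i < kHashes; ++i) bits.set(hash(s, i));
    }
    bool probablyContains(const std::string& s) const {
        for (int i = 0; i < kHashes; ++i)
            if (!bits.test(hash(s, i))) return false;  // definitely not seen
        return true;                                   // probably seen
    }
};
```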
Relevant xkcd, in Randall's words:
Sometimes, you can tell Bloom filters are the wrong tool for the job, but when they're the right one you can never be sure.
Obscure 10 years ago maybe. These days there have been so many articles about them I bet they're more widely known than more useful and standard things like prefix trees (aka tries).
Conflict free replicated data types, I don’t know if I’d call them obscure but they’re definitely cool and less often used. They’re for shared state across computers, like in collaborative apps
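The simplest one to get the flavor is a grow-only counter: each replica only ever increments its own slot, and merging is an elementwise max, so replicas converge no matter what order the syncs happen in. A hedged sketch (my own names, not from any particular library):

```cpp
// G-Counter CRDT sketch: one slot per replica, merge = elementwise max.
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <vector>

struct GCounter {
    std::size_t id;            // this replica's index
    std::vector<long> slots;   // one counter per replica

    void increment() { ++slots[id]; }  // replicas only touch their own slot
    long value() const { return std::accumulate(slots.begin(), slots.end(), 0L); }

    // Merge is commutative, associative, and idempotent, so syncs can arrive
    // in any order, be repeated, or be retried: state still converges.
    void merge(const GCounter& other) {
        for (std::size_t i = 0; i < slots.size(); ++i)
            slots[i] = std::max(slots[i], other.slots[i]);
    }
};
```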
They were pretty obscure until recently! I would say most people still don’t know about them.
From just the name my mind instantly thought of the conflict as "conflict diamonds", and I began to wonder what constitutes a conflict free boolean or integer.
If anyone wants to take a crack at writing up why primitives are unfortunate, and we should move on to new "conflict free data types"™ I will cheer you on!
Also, very interesting read about actual conflict free replicated data types. Cheers!
This sounds like document collaboration software like Google sheets where multiple people can edit a document at the same time
XOR lists are obscure and cursed but cool. And not useful on modern hardware as the CPU can't predict access patterns. They date from a time when every byte of memory counted and CPUs didn't have pipelines.
(In general, all linked lists or trees are terrible for performance on modern CPUs. Prefer vectors or btrees with large fanout factors. There are some niche use cases still for linked lists in for example kernels, but unless you know exactly what you are doing you shouldn't use linked data structures.)
EDIT: Fixed spelling
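For the curious, the core of the trick as a hedged sketch: each node stores prev XOR next in a single field, so one pointer-sized field gives you a doubly linked list, as long as you remember where you came from while walking.

```cpp
// XOR linked list sketch: one link field holds (address of prev) ^ (address of next).
#include <cstdint>

struct Node {
    int value;
    std::uintptr_t link;  // prev ^ next
};

std::uintptr_t addr(Node* n) { return reinterpret_cast<std::uintptr_t>(n); }

// Step forward (or backward, by symmetry). The next address isn't known
// until the XOR resolves, which is exactly why modern CPUs can't prefetch it.
Node* step(Node* from, Node* cur) {
    return reinterpret_cast<Node*>(cur->link ^ addr(from));
}
```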
The CSR (compressed sparse row) format is a very simple but efficient way of storing sparse matrices, meaning matrices with a large number of zero entries which shouldn't all occupy memory. It has three arrays: the first holds all non-zero entries in order, read row by row; the second contains the column index of each non-zero element (and therefore has the same length as the first); the third indexes into the first array, marking where each row starts.
On sparse matrices it has optimal memory efficiency and fast lookups, the main downside is that adding or removing elements from the matrix requires shifting all three arrays, so it is mostly useful for immutable data.
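A tiny worked example (0-based indices), in case the three arrays are hard to picture:

```cpp
// CSR sketch for the 3x4 matrix:
//   [5 0 0 2]
//   [0 0 3 0]
//   [0 4 0 0]
#include <vector>

struct CSR {
    std::vector<double> values;  // non-zero entries, row by row
    std::vector<int> colIndex;   // column of each entry (same length as values)
    std::vector<int> rowStart;   // values index where each row begins,
};                               // plus one extra entry marking the end

CSR m = {
    {5, 2, 3, 4},  // values
    {0, 3, 2, 1},  // column indices
    {0, 2, 3, 4},  // row i occupies values[rowStart[i]..rowStart[i+1])
};

// Lookup only scans the non-zeros of a single row.
double at(const CSR& m, int row, int col) {
    for (int k = m.rowStart[row]; k < m.rowStart[row + 1]; ++k)
        if (m.colIndex[k] == col) return m.values[k];
    return 0.0;  // everything not stored is zero
}
```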
Oh yeah that's a good one
And also, if you're representing a 0/1 matrix, you can do away with the first array altogether, since every stored value is just 1.
IMO, circular buffers with two advancing pointers are an awesome data structure for high performance compute. They're used in virtualized network hardware (see virtio) and minimizing Linux syscalls (see io_uring). Each ring implements a single producer, single consumer queue, so two rings are usually used for bidirectional data transfer.
It's kinda obscure because the need for asynchronous-transfer queues doesn't show up that often unless dealing with hardware or crossing outside of a single CPU. But it's becoming relevant due to coprocessors (ie small ARM CPUs attached to a main CPU) that process offloaded requests and then quickly return the result when ready.
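A hedged sketch of the single-producer/single-consumer core (real implementations like virtio and io_uring add cache-line padding and more careful layout):

```cpp
// SPSC ring: producer writes only head, consumer writes only tail, so the
// queue needs no locks, just acquire/release ordering on the two indices.
#include <atomic>
#include <cstddef>
#include <optional>

template <typename T, std::size_t N>  // N must be a power of two
struct SpscRing {
    T buf[N];
    std::atomic<std::size_t> head{0};  // producer-owned
    std::atomic<std::size_t> tail{0};  // consumer-owned

    bool push(const T& v) {
        std::size_t h = head.load(std::memory_order_relaxed);
        if (h - tail.load(std::memory_order_acquire) == N) return false;  // full
        buf[h % N] = v;
        head.store(h + 1, std::memory_order_release);  // publish to consumer
        return true;
    }
    std::optional<T> pop() {
        std::size_t t = tail.load(std::memory_order_relaxed);
        if (t == head.load(std::memory_order_acquire)) return std::nullopt;  // empty
        T v = buf[t % N];
        tail.store(t + 1, std::memory_order_release);  // free the slot
        return v;
    }
};
```

Two of these, one per direction, give you the bidirectional transfer described above.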
One cool trick that can be used with circular buffers is to use memory mapping to map the same block of memory to 2 consecutive virtual address blocks. That way you can read the entire contents of the buffer as if it was just a regular linear buffer with an offset.
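A hedged, Linux-only sketch of that mapping trick (error handling omitted; size must be a multiple of the page size):

```cpp
// Map one physical buffer at two consecutive virtual address ranges, so a
// read that wraps around the ring is still one contiguous memcpy.
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

char* mirroredBuffer(std::size_t size) {
    int fd = memfd_create("ring", 0);  // anonymous in-memory file
    ftruncate(fd, size);
    // Reserve 2*size of address space, then map the same file into both halves.
    char* base = static_cast<char*>(mmap(nullptr, 2 * size, PROT_NONE,
                                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    mmap(base,        size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
    mmap(base + size, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
    close(fd);
    return base;  // base[i] and base[i + size] alias the same byte
}
```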
Not necessarily obscure, but I don't think Tries get enough love.
Edit: I can't spell
I came up with a kind of clever data type for storing short strings in a fixed size struct so they can be stored on the stack or inline without any allocations.
It's always null-terminated so it can be passed directly as a C-style string, but it also stores the string length without using any additional data (Getting the length would normally have to iterate to find the end).
The trick is to store the number of unused bytes in the last character of the buffer. When the string is full, there are 0 unused bytes and the size byte overlaps the null terminator.
(Only works for strings < 256 chars excluding null byte)
Implementation in C++ here: https://github.com/frustra/strayphotons/blob/master/src/common/common/InlineString.hh
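A stripped-down sketch of the trick (my own simplified version, not the linked implementation):

```cpp
// Fixed-size inline string: the last byte holds the count of UNUSED bytes.
// When the string is full that count is 0, doubling as the null terminator.
#include <cstddef>
#include <cstring>

template <std::size_t N>  // whole struct is N bytes; max length is N - 1 (N <= 256)
struct ShortString {
    char buf[N];

    ShortString(const char* s) {
        std::size_t len = std::strlen(s);  // caller guarantees len <= N - 1
        std::memcpy(buf, s, len);
        buf[len] = '\0';
        buf[N - 1] = static_cast<char>(N - 1 - len);  // unused-byte count
    }
    std::size_t size() const {  // O(1), no strlen needed
        return N - 1 - static_cast<unsigned char>(buf[N - 1]);
    }
    const char* c_str() const { return buf; }  // always null-terminated
};
```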
Edit: Since a couple people don't seem to understand the performance impact of this vs regular std::string, here's a demo: https://godbolt.org/z/34j7obnbs This generates 10000 strings like "Hello, World! 00001" via concatenation. The effect is huge in debug mode, but still has performance benefits with optimizations turned on:
With -O3 optimization:
std::string: 0.949216ms
char[256] with strlen: 0.88104ms
char[256] without strlen: 0.684734ms
With no optimization:
std::string: 3.5501ms
char[256] with strlen: 0.885888ms
char[256] without strlen: 0.687733ms
(You may need to run it a few times to get stable numbers due to random server load on godbolt)
Changing the buffer size to 32 bytes has a negligible performance improvement over 256 bytes in this case, but might be slightly faster due to the whole string fitting in a cache line.
Interesting idea, but your trick is never really going to help (you can store up to 255 bytes instead of 254). Also always using 256 bytes for every string seems wasteful.
I think LLVM's small string optimisation is always going to be a better option: https://joellaity.com/2020/01/31/string.html
22 characters is significantly less useful than 255 characters. I use this for resource name keys, asset file paths, and a few other scenarios. The max size is configurable, so I know that nothing I am going to store is ever going to require heap allocations (really bad to be doing every frame in a game engine).
I developed this specifically after benchmarking a simpler version and noticed a significant amount of time being spent in strlen(), and it had real benefits in my case.
Admittedly just storing a struct with a static buffer and separate size would have worked pretty much the same and eliminated the 255 char limitation, but it was fun to build.
22 characters is significantly less useful than 255 characters.
You can still use more than 22 characters; it just switches to the heap.
nothing I am going to store is ever going to require heap allocations
I would put good money that using 256 bytes everywhere is going to be slower overall than just using the heap when you need more than 22 characters. 22 is quite a lot, especially for keys. ThisReallyLongKey is still only 17.
I don't use 256 bytes everywhere. I use a mix of 64, 128, and 256 byte strings depending on the specific use case.
In a hot path, having the data inline is much more important than saving a few hundred bytes. Cache efficiency plus eliminating heap allocations has huge performance benefits in a game engine that's running frames as fast as possible.
I came up with a kind of clever data type for storing short strings in a fixed size struct so they can be stored on the stack or inline without any allocations.
C++'s std::string already does that for short strings, while seamlessly switching to allocation for long strings.
It's always null-terminated so it can be passed directly as a C-style string, but it also stores the string length without using any additional data (Getting the length would normally have to iterate to find the end).
Also the case in the standard library
The trick is to store the number of unused bytes in the last character of the buffer. When the string is full, there are 0 unused bytes and the size byte overlaps the null terminator.
Iirc, that trick was used in one implementation but discontinued because it was against the standard.
(Only works for strings < 256 chars excluding null byte)
If you need to reserve a niche to flag the heap-allocated case, you can still get to 254, but the typical choice seems to be around 16.
Maybe not that obscure, but Joe Celko’s Nested Set Model gave me exactly what I needed when I learned of it: fast queries on seldom-changing hierarchical database records.
Updates are heavy, but the reads are incredibly light.
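The core idea, sketched outside the database with hypothetical names: every node carries the entry/exit numbers of a depth-first walk, so "descendant of" becomes a pure interval check, with no recursion or repeated self-joins over parent pointers.

```cpp
// Nested set sketch: children's (lft, rgt) intervals nest strictly inside
// their parent's, so subtree queries are plain range comparisons.
#include <vector>

struct NsNode {
    int lft, rgt;  // DFS entry/exit numbers assigned when (re)building the tree
};

bool isDescendant(const NsNode& a, const NsNode& b) {  // is a under b?
    return a.lft > b.lft && a.rgt < b.rgt;
}

// All descendants of b in one linear pass; in SQL this is a single
// WHERE lft BETWEEN ... AND ... range query, hence the cheap reads.
std::vector<NsNode> subtree(const std::vector<NsNode>& all, const NsNode& b) {
    std::vector<NsNode> out;
    for (const auto& n : all)
        if (isDescendant(n, b)) out.push_back(n);
    return out;
}
```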
I came here to mention these too. One addition that can be helpful in large trees is to add a depth attribute to each node so that you can easily limit the depth of subtree you retrieve.
Skew binary trees. They're an immutable data structure combining the performance characteristics of lists (O(1) non-amortized push/pop) and b-trees (log(N) lookup and updates)
They use a sequence of complete trees, cleverly arranged using skew binary numbers so that adding an element never causes cascading updates.
In practice they're superseded by relaxed radix balanced trees.
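What's described here matches Okasaki's skew binary random access lists; a hedged sketch of the two key operations:

```cpp
// Skew binary random access list sketch: a list of complete binary trees
// whose sizes follow skew binary numbers (at most the first two are equal).
#include <cstddef>
#include <memory>

struct Node {
    int value;
    std::shared_ptr<Node> left, right;  // complete tree, preorder indexing
};
struct Root {
    std::size_t size;            // always 2^k - 1
    std::shared_ptr<Node> tree;
    std::shared_ptr<Root> next;
};
using List = std::shared_ptr<Root>;

// O(1) push with no cascading updates: if the two smallest trees are equal
// in size, merge them under the new element; otherwise add a singleton tree.
List push(const List& l, int x) {
    if (l && l->next && l->size == l->next->size) {
        auto n = std::make_shared<Node>(Node{x, l->tree, l->next->tree});
        return std::make_shared<Root>(Root{2 * l->size + 1, n, l->next->next});
    }
    return std::make_shared<Root>(
        Root{1, std::make_shared<Node>(Node{x, nullptr, nullptr}), l});
}

// O(log n) lookup: skip whole trees, then binary-descend within one tree.
int lookup(const List& l, std::size_t i) {
    const Root* r = l.get();
    while (i >= r->size) { i -= r->size; r = r->next.get(); }
    const Node* n = r->tree.get();
    for (std::size_t sz = r->size / 2; i > 0; sz /= 2) {
        --i;  // skip this subtree's root
        if (i < sz) { n = n->left.get(); }
        else        { i -= sz; n = n->right.get(); }
    }
    return n->value;
}
```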
I get way more use out of Doubly Connected Edge Lists (DCEL) than I ever thought I would when I first learned about them in school.
When I want to render simple stuff to the screen, built-in functions like 'circle' or 'line' work. But for any shapes more complicated than that, I often find that it's useful to work with the data in DCEL form.
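For anyone who hasn't met them, the core record types as a minimal sketch: each undirected edge becomes two directed half-edges, which makes "walk around a face" and "hop to the neighboring face" constant-time pointer chases.

```cpp
// Doubly connected edge list (DCEL) sketch.
struct Vertex; struct Face;

struct HalfEdge {
    Vertex* origin;   // vertex this half-edge leaves from
    HalfEdge* twin;   // same edge, opposite direction (borders the other face)
    HalfEdge* next;   // next half-edge around the same face (CCW)
    HalfEdge* prev;   // previous half-edge around the same face
    Face* face;       // face this half-edge borders
};

struct Vertex { double x, y; HalfEdge* leaving; };
struct Face   { HalfEdge* edge; };  // any half-edge on the face's boundary

// Example traversal: visit every edge bordering a face.
template <typename F>
void forEachEdge(Face* f, F visit) {
    HalfEdge* e = f->edge;
    do { visit(e); e = e->next; } while (e != f->edge);
}
```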
I personally don't think it's that obscure but I have never seen this used in production code that I didn't write: the linked hash map or ordered hash map.
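For anyone who hasn't seen one: it's a hash map that also remembers insertion order, so you get O(1) lookup and deterministic iteration. A hedged sketch:

```cpp
// Linked hash map sketch: a list preserves insertion order, a hash map
// indexes into it for O(1) lookup.
#include <iterator>
#include <list>
#include <unordered_map>
#include <utility>

template <typename K, typename V>
class LinkedHashMap {
    std::list<std::pair<K, V>> order_;  // insertion order
    std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator> index_;
public:
    void put(const K& k, const V& v) {
        auto it = index_.find(k);
        if (it != index_.end()) { it->second->second = v; return; }  // overwrite
        order_.emplace_back(k, v);
        index_[k] = std::prev(order_.end());
    }
    const V* get(const K& k) const {
        auto it = index_.find(k);
        return it == index_.end() ? nullptr : &it->second->second;
    }
    auto begin() const { return order_.begin(); }  // iterate in insert order
    auto end() const { return order_.end(); }
};
```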
I've used it twice that I can recall.
Merkle trees
Aren’t these one of the first structures you learn about in any comp sci course? Still good to know but not sure it’s obscure.
Not disagreeing with you, but I find it funny that this is the only data structure I have not heard of in this entire thread 🤣
Old tech is more like it. Good basics, but you wouldn't code in ASM most of the time even if you learned it.
How about this variation of linked lists? https://www.data-structures-in-practice.com/intrusive-linked-lists/
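The gist, as a small sketch: the links live inside the element itself, so joining a list allocates nothing, and unlinking is O(1) given just the element (no lookup, no list head needed).

```cpp
// Intrusive linked list sketch: the node is embedded in the object.
struct ListHook {
    ListHook* prev = this;  // self-linked means "not in any list"
    ListHook* next = this;

    void insertAfter(ListHook* pos) {  // link this hook in after pos
        prev = pos; next = pos->next;
        pos->next->prev = this; pos->next = this;
    }
    void unlink() {                    // O(1) removal, no traversal
        prev->next = next; next->prev = prev;
        prev = next = this;
    }
};

struct Task {
    int id;
    ListHook queueHook;  // this Task can sit in one queue via this hook
};
```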
Quite well known now because Rust-haters like to point out how they're awkward to use in Rust.
An ultimately doomed one that existed in Perl for a while was the pseudohash. They were regular integer-indexed arrays that could be accessed as though they were hashes (aka associative arrays / dictionaries). They even made it into the main Perl books at the time as this awesome time saving device. Except they weren't.
I did a quick web search just now and someone did a talk about why they weren't a great idea and they tell it better than I could; Link: https://perl.plover.com/classes/pseudohashes/
The supplied video doesn't have great sound quality, and it might be better to just click through the slides under Outline at the bottom there.
Skip lists are interesting data structures. The underlying mechanism is a layered, probabilistic linked list: each node gets some random height 'h', and the higher levels act as express lanes that skip over runs of nodes. Compared to a traditional linked list, which has to traverse every stored value, a skip list search starts at the maxLevel/maxHeight and moves right while "next" points to a key smaller than the one it's looking for, dropping down a level whenever "next" is a nullptr or a larger key. This reduces the expected search time from O(n) for a plain linked list to O(log n).
The reason it's probabilistic (in this case using a pseudo-random number to pick node heights) is that this makes inserting and removing elements easy; if you went with the idealized, perfectly balanced form, you would have to reconstruct large parts of the data structure every time you add or remove an element.
In my testing with 1,000,000 elements, a search that took 6s with a linked list dropped to less than 1s with a skip list!
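The search itself is only a few lines; a hedged sketch (insertion and removal omitted, names my own):

```cpp
// Skip list search sketch. 'head' is a sentinel node whose next vector has
// one slot per level; node heights are chosen randomly at insert time.
#include <vector>

struct SkipNode {
    int key;
    std::vector<SkipNode*> next;  // next[i] = successor at level i
};

// Start at the top level; move right while the next key is smaller,
// then drop a level. Expected O(log n) with randomized heights.
SkipNode* find(SkipNode* head, int maxLevel, int key) {
    SkipNode* n = head;
    for (int lvl = maxLevel - 1; lvl >= 0; --lvl)
        while (n->next[lvl] && n->next[lvl]->key < key)
            n = n->next[lvl];
    n = n->next[0];
    return (n && n->key == key) ? n : nullptr;
}
```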
Fibonacci heaps are pretty cool. Not used very often b/c they're awful to implement, but better complexity than many other heaps.
Also Binary Lifting is closer to an algorithm than a data structure but it's used in Competitive Programming a fair bit, and isn't often taught: https://cp-algorithms.com/graph/lca_binary_lifting.html
And again closer to an algo than a data structure, but Sum over Subsets DP in 3^n also has a cool little bit of math in it: https://cp-algorithms.com/algebra/all-submasks.html
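The trick at the heart of that link is tiny; here it is as a standalone snippet:

```cpp
// Enumerate all non-empty submasks s of a bitmask m, in decreasing order:
// (s - 1) & m steps to the next smaller submask. Summed over all m in
// [0, 2^n), this visits 3^n (mask, submask) pairs, hence the O(3^n) SOS DP.
#include <cstdio>

int main() {
    unsigned m = 0b1011;
    for (unsigned s = m; s; s = (s - 1) & m)
        std::printf("submask: %#x\n", s);  // add a do-while to include s = 0
}
```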
Not really a data structure per se, but just knowing LISP and the interesting structures it uses internally.
The results of LISP operations CAR, CDR, CADR and the other one I can't remember now.
B-trees are cool but not necessarily obscure. I didn't learn about them in college. The name sounds like binary tree, and it's similar, but it's different: it's a data structure built to take advantage of the way disk reads work.
I've been knee-deep in these lately so I'm a big fan
Theta sketches!
Do you want to approximately count a large volume of items, but save the state for later so you can UNION, INTERSECT, and even DIFF them? Then Thetas are right for you!
Or basically anything in the Apache DataSketches library.
Dewey decimal
Not exactly a data structure on its own, but bitslicing is a neat trick to turn some variable-time operations into constant-time operations. Used in cryptography for "substitution box" (S-box) operations, which can otherwise leak secrets via data-dependent timing variations.
The data structure side of it is breaking up n words into bits and interleaving them within n variables (usually machine registers), so that the first variable contains the first bit from each word, the second variable the second bit, etc. It's also called "SIMD within a register".
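A hedged sketch of that interleaving step for 8 one-byte words:

```cpp
// Bit-slice 8 bytes into 8 variables, one bit position per variable.
#include <cstdint>

// After slicing, bit j of slice[i] is bit i of in[j]. A boolean circuit
// applied to the slices now processes all 8 inputs at once, with no
// data-dependent branches or table lookups to leak timing.
void bitslice8(const uint8_t in[8], uint8_t slice[8]) {
    for (int i = 0; i < 8; ++i) {          // for each bit position...
        uint8_t s = 0;
        for (int j = 0; j < 8; ++j)        // ...gather that bit from every word
            s |= ((in[j] >> i) & 1u) << j;
        slice[i] = s;
    }
}
```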