After a few months of more or less diligent study I finished “consulting” three great resources for learning about algorithms: the Algorithms Illuminated book series by Tim Roughgarden, Introduction to Algorithms by Cormen et al., and the Stanford course CS 168: The Modern Algorithmic Toolbox. I was not diligent enough to do the homework and solve all the proposed problems, but I followed all the arguments and most proofs.

The most important thing one learns from such resources is how and where to look for a good algorithm if the need arises. But for me the most surprising thing has been the data structures. My formal computer science education stopped at lists (stacks and queues, not even deques). I was surprised to learn about the multitude of other very useful data structures. Being able to find the right data structure for your problem seems as useful as finding the right algorithm, and it is the other most practical skill one can take away from formal computer science.

Below is a list of the most surprising (though sometimes obscure) data structures I encountered.

  1. Bloom filters

  2. HyperLogLog

  3. Y-fast trie

  4. Union-find

  5. Conflict-free replicated data type (CRDT)

  6. Order statistic tree

  7. Interval tree
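To give a flavor of how compact some of these structures can be, here is a minimal union-find sketch in Python, with path compression and union by size. This is my own illustration of the standard technique, not code from any of the resources above, and it leaves out everything a production version would need.

```python
# Minimal union-find (disjoint-set) sketch: tracks which elements
# belong to the same set, with near-constant-time operations.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))  # each element starts as its own root
        self.size = [1] * n           # size of the tree rooted at each element

    def find(self, x):
        # Walk up to the root, compressing the path as we go (path halving).
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False  # already in the same set
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra  # attach the smaller tree under the larger root
        self.size[ra] += self.size[rb]
        return True

uf = UnionFind(5)
uf.union(0, 1)
uf.union(1, 2)
print(uf.find(0) == uf.find(2))  # True: 0, 1, 2 are now one set
print(uf.find(3) == uf.find(4))  # False: 3 and 4 were never joined
```

Union-find is the backbone of Kruskal's minimum-spanning-tree algorithm, among others, and is a good example of how a well-chosen structure makes an algorithm almost trivial to state.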

I also enjoyed this HN discussion, where one can find tons of other data structures. As always, there is another side to the coin: these data structures seem designed to minimize memory footprint and/or algorithmic complexity, yet these days memory access can often turn out to be the real bottleneck.