Monday, May 7, 2012

Need memory efficient way to store tons of strings (was: HAT-Trie implementation in java)


I am working with a large set (5-20 million) of String keys (average length 10 characters) which I need to store in an in-memory data structure that supports the following operation in constant or near-constant time:




// Returns true if the input is present in the container, false otherwise
public boolean contains(String input)



Java's HashMap is proving more than satisfactory as far as throughput is concerned, but it takes up a lot of memory. I am looking for a solution that is memory-efficient while still supporting decent throughput (comparable with, or nearly as good as, hashing).



I don't care about the insertion/deletion times. In my application, I will be performing only insertions (only at startup time) and will subsequently only be querying the data structure using the contains method for the life of the application.



I have read that the HAT-trie data structure comes closest to meeting my needs. I am wondering if there is a library with an implementation.



Other suggestions with pointers to implementations welcome.



Thank you.


Source: Tips4all

4 comments:

  1. The trie seems like a very good idea for your constraints.

    A "thinking outside the box" alternative: if you can afford some probability of answering "present" for a string that is absent (i.e. false positives), use a Bloom filter, as suggested by WizardOfOdds in the comments.

    For k=1, a Bloom filter is like a hash table without the keys: each "bucket" is simply a boolean that tells you whether at least one input with the same hash was present. If 1% false positives is acceptable, the bit array can be as small as about 100 * 20 million bits, roughly 250 MB. For 1-in-1000 false positives, about 2.5 GB.

    Using several hash functions instead of one can improve the false positive rate for the same amount of bits.
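    The single-hash (k=1) case described above can be sketched in a few lines of Java. The class name and sizing below are illustrative, not from the comment; a bigger bit array, or several hash functions, lowers the false-positive rate.

    ```java
    import java.util.BitSet;

    // Minimal k=1 Bloom-style filter: a flat bit array, no keys stored.
    // May return false positives, never false negatives.
    public class StringFilter {
        private final BitSet bits;
        private final int size;

        public StringFilter(int size) {
            this.size = size;
            this.bits = new BitSet(size);
        }

        public void add(String s) {
            bits.set(index(s));
        }

        public boolean contains(String s) {
            return bits.get(index(s));
        }

        // Map the (possibly negative) hashCode into [0, size).
        private int index(String s) {
            return Math.floorMod(s.hashCode(), size);
        }
    }
    ```

    At 20 million keys you would size the BitSet at roughly 100 bits per key for a ~1% false-positive rate, per the arithmetic above.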

  2. For space efficiency, O(log(n)) lookup, and simple code, try binary search over an array of characters. 20 million keys of average length 10 makes 200 million characters: 400MB if you need 2 bytes/char; 200MB if you can get away with 1. On top of this you need to somehow represent the boundaries between the keys in the array. If you can reserve a separator character, that's one way; otherwise you might use a parallel array of int offsets.

    The simplest variant would use an array of Strings, at a high space cost from per-object overhead. It ought to still beat a hashtable in space efficiency, though not as impressively.
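    The parallel-offsets variant above might look like this sketch (class and method names are mine; keys must be supplied in sorted order): one flat char[] holding all keys back to back, plus an int[] of start offsets, searched with an ordinary binary search.

    ```java
    // Packed sorted-string set: no per-key objects, O(log n) lookups.
    // Keys are concatenated into one char[]; offsets[i] marks the start
    // of key i, with a final sentinel entry equal to chars.length.
    public class PackedStringSet {
        private final char[] chars;
        private final int[] offsets;

        public PackedStringSet(String[] sortedKeys) {
            int total = 0;
            for (String k : sortedKeys) total += k.length();
            chars = new char[total];
            offsets = new int[sortedKeys.length + 1];
            int pos = 0;
            for (int i = 0; i < sortedKeys.length; i++) {
                offsets[i] = pos;
                sortedKeys[i].getChars(0, sortedKeys[i].length(), chars, pos);
                pos += sortedKeys[i].length();
            }
            offsets[sortedKeys.length] = pos;
        }

        public boolean contains(String key) {
            int lo = 0, hi = offsets.length - 2;
            while (lo <= hi) {
                int mid = (lo + hi) >>> 1;
                int cmp = compare(mid, key);
                if (cmp == 0) return true;
                if (cmp < 0) lo = mid + 1; else hi = mid - 1;
            }
            return false;
        }

        // Compare stored key mid against key, like String.compareTo.
        private int compare(int mid, String key) {
            int start = offsets[mid], len = offsets[mid + 1] - start;
            int min = Math.min(len, key.length());
            for (int i = 0; i < min; i++) {
                int d = chars[start + i] - key.charAt(i);
                if (d != 0) return d;
            }
            return len - key.length();
        }
    }
    ```

    The int[] of offsets adds 4 bytes per key on top of the character data, which is still far below HashMap's per-entry overhead.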

  3. Google brings up a blog post on HAT tries in Java. But I don't see how this will solve your problem directly: the structure is a shallow trie over prefixes of the keys, with the leaves being hashtables holding the suffixes of all keys with the given prefix. So in total, you have a lot of hashtables storing all of the keys that are in your current one big hashtable (perhaps saving a few bytes per key overall because of the common prefixes). Either way, you need a more space-efficient hashtable than the default Java one, or the per-object overhead will hit you just as badly. So why not start with a specialized hashtable class for string keys only, if you take this route, and worry about the trie part only if it still seems worthwhile then?
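    A "specialized hashtable class for string keys only" could start as simply as open addressing over one flat String[], with no per-entry node objects. A sketch, with illustrative names and a fixed capacity (a real version would resize and handle a full table):

    ```java
    // Open-addressing string set: one String[] of slots, linear probing.
    // Saves the Node objects and next-pointers a chained HashMap allocates.
    public class CompactStringSet {
        private final String[] slots;

        public CompactStringSet(int expectedSize) {
            // Power-of-two capacity at least twice expectedSize,
            // so the table stays at most half full.
            int cap = Integer.highestOneBit(expectedSize * 2 - 1) << 1;
            slots = new String[cap];
        }

        public void add(String key) {
            int i = indexOf(key);
            if (slots[i] == null) slots[i] = key;
        }

        public boolean contains(String key) {
            return slots[indexOf(key)] != null;
        }

        // Probe until we hit the key or an empty slot.
        private int indexOf(String key) {
            int mask = slots.length - 1;
            int i = (key.hashCode() * 0x9E3779B9) & mask; // spread the hash
            while (slots[i] != null && !slots[i].equals(key)) {
                i = (i + 1) & mask;
            }
            return i;
        }
    }
    ```

    If the plain hashtable is still too large, the trie-over-prefixes idea can then be layered on top to shorten the stored suffixes.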

  4. A ternary search tree is similar to a trie but has the advantage of using less memory. You can read about ternary search trees here, here, and here. One of the main papers on the subject, by Jon Bentley and Robert Sedgewick, is here; it also covers sorting strings quickly, so don't be put off by that.
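    A minimal sketch of the structure (insert and contains only; names are mine, and a production version would also randomize or balance insertion order):

    ```java
    // Ternary search tree: each node holds one char and three children
    // (less-than, equal/next-char, greater-than), so it needs far fewer
    // pointers per node than a 256-way trie.
    public class TernarySearchTree {
        private Node root;

        private static final class Node {
            final char c;
            Node lo, eq, hi;
            boolean end; // true if some key ends at this node
            Node(char c) { this.c = c; }
        }

        public void insert(String key) {
            if (key.isEmpty()) return;
            root = insert(root, key, 0);
        }

        private Node insert(Node n, String key, int i) {
            char c = key.charAt(i);
            if (n == null) n = new Node(c);
            if (c < n.c)                        n.lo = insert(n.lo, key, i);
            else if (c > n.c)                   n.hi = insert(n.hi, key, i);
            else if (i < key.length() - 1)      n.eq = insert(n.eq, key, i + 1);
            else                                n.end = true;
            return n;
        }

        public boolean contains(String key) {
            if (key.isEmpty()) return false;
            Node n = root;
            int i = 0;
            while (n != null) {
                char c = key.charAt(i);
                if (c < n.c) n = n.lo;
                else if (c > n.c) n = n.hi;
                else if (i < key.length() - 1) { n = n.eq; i++; }
                else return n.end;
            }
            return false;
        }
    }
    ```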
