Trie or Set

Given a grid or input stream of characters, I would like to discover all words that appear in a given dictionary. The dictionary could contain all English words or phrases (say, for an autocomplete service), or words from any other language. This is especially useful for languages where words are not clearly separated in writing (e.g., Japanese, Chinese, Thai).

Typically, this is done with a Trie or a DAWG (Directed Acyclic Word Graph). A Trie can be implemented in Python using a nested dict, i.e.,

def _make_trie(wordict):
    ''' build a nested-dict trie from the words in wordict
    '''
    trie = {}
    for word in wordict:
        current_trie = trie
        for letter in word:
            current_trie = current_trie.setdefault(letter, {})
        current_trie['$'] = '$'  # sentinel marking the end of a complete word
    return trie

def _in_trie(trie, word):
    ''' True iff word is a complete word or a prefix in the trie
    '''
    current_trie = trie
    for letter in word:
        if letter not in current_trie:
            return False
        current_trie = current_trie[letter]
    return True
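
As a quick sanity check, building the trie from a two-word dictionary and probing a few strings might look like this (the example words are arbitrary):

trie = _make_trie(['apple', 'are'])
print(_in_trie(trie, 'app'))   # True  -- 'app' is a prefix of 'apple'
print(_in_trie(trie, 'are'))   # True  -- a complete word
print(_in_trie(trie, 'apex'))  # False -- no branch for 'e' after 'ap'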

Using this approach, we can scan through a large stream of characters for potential words. Imagine a classic word-search game where you are looking for words within a grid of characters. Programmatically, you would scan through the grid, extending candidate character combinations one letter at a time. The advantage of a Trie (or DAWG) is that it allows for efficient pruning: if a character combination is not in the Trie, then no word begins with it, and you can stop searching along that path.
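
For illustration, a pruning scan over the rows of a grid might look like the following sketch; the name _triescan_ltr and the row-by-row, left-to-right traversal are my own simplification (a real word-search would also walk columns and diagonals):

def _triescan_ltr(grid, trie):
    ''' generator yielding dictionary words found in each row,
        pruning as soon as the trie has no matching branch
    '''
    for line in grid:
        for i in range(len(line)):
            node = trie
            for c in range(i, len(line)):
                if line[c] not in node:
                    break                # no word starts with this prefix: prune
                node = node[line[c]]
                if '$' in node:
                    yield line[i:c + 1]  # complete word found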

An alternative approach is to create a Set of word prefixes, i.e.,

almost_words = set()
for word in wordict:
    for i in range(len(word) - 1):
        almost_words.add(word[0:i + 1])  # every proper prefix of word

If the dictionary contains ['apple', 'are'], then the Set almost_words would contain the following:

{'a', 'ap', 'app', 'appl', 'ar'}

In other words, rather than test if a character string exists in the Trie, one can simply check the Set almost_words. If there is no match then that particular path can be pruned. Here is a simple left-to-right character scanner that uses this approach:

def _setscan_ltr(grid, wordict):
    ''' generator yielding dictionary words found scanning each row left-to-right
    '''
    words = set(wordict)
    almost_words = set()
    maxlen = 0
    for word in words:
        if len(word) > maxlen:
            maxlen = len(word)
        for i in range(len(word) - 1):
            almost_words.add(word[0:i + 1])
    for line in grid:
        for i in range(len(line)):
            candidate_word = ''
            for c in range(min(maxlen, len(line) - i)):
                candidate_word += line[i + c]
                if candidate_word in words:
                    yield candidate_word  # complete word found
                if candidate_word not in almost_words:
                    break                 # no word starts with this prefix: prune
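
Driving the generator over a toy grid (the rows here are arbitrary strings) yields each dictionary word as it is found:

grid = ['xappley', 'zzareqq']
for word in _setscan_ltr(grid, ['apple', 'are']):
    print(word)   # prints 'apple', then 'are'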

I created a simple test case to determine whether a Set was truly faster, and whether it was as memory efficient. There was a noticeable increase in performance using a Set over a Trie (for both large and small data sets), and in the runs below the Set version also peaked at less resident memory (roughly 507 MB versus 681 MB). Interestingly, the performance difference was even more pronounced when using Japanese characters, suggesting that language parsers can use a simple Set (or hash map) instead of a Trie or a DAWG.

$ /usr/bin/time ./test_j_set.py
50220
177.84user 0.42system 2:58.54elapsed 99%CPU (0avgtext+0avgdata 507412maxresident)k
0inputs+0outputs (0major+145801minor)pagefaults 0swaps

$ /usr/bin/time ./test_j_trie.py
50220
250.44user 0.56system 4:11.86elapsed 99%CPU (0avgtext+0avgdata 680960maxresident)k
0inputs+0outputs (0major+184571minor)pagefaults 0swaps
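
The actual test scripts (test_j_set.py and test_j_trie.py) are in the repository; a minimal harness in the same spirit, reusing the functions above with a stand-in dictionary and a random grid (both placeholders, not the data used for the timings above), might look like:

import random
import string
import time

wordict = ['apple', 'are', 'ape', 'rare', 'real']  # stand-in dictionary
grid = [''.join(random.choices(string.ascii_lowercase, k=1000))
        for _ in range(1000)]                      # 1000 x 1000 random grid

start = time.perf_counter()
n_set = sum(1 for _ in _setscan_ltr(grid, wordict))
print('set:  %d words in %.2fs' % (n_set, time.perf_counter() - start))

start = time.perf_counter()
trie = _make_trie(wordict)
n_trie = sum(1 for _ in _triescan_ltr(grid, trie))
print('trie: %d words in %.2fs' % (n_trie, time.perf_counter() - start))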

Full results and code are available on my GitHub.