Link

funclojure:

Overview

This is a simple post with my views on how to simplify map/reduce sequences for performance reasons.

Functional programming gives us a simple set of structures to write programs. These structures have mathematical properties such that logic is written as equations and as with all…

Link

A very interesting article on Map/Reduce patterns, totally worth a read. It’s over a year old, but I only came by it recently, so if you’re as behind as I am, it might be worth posting here!

Text

Anonymous said: for your splaytree, what exactly does the getsubtree(key) do exactly?

Thank you for reading, Anonymous! =)

You can see the implementation here:

https://github.com/pcalcao/datastructures/blob/master/trees/binary_search_tree.py

Since the underlying structure of the SplayTree is a BinarySearchTree, which is built as a recursive structure, getsubtree returns the node associated with the key (not just the value, but the actual node, which is effectively a complete tree from that point down). For instance:

          5
     3         7
  1    4    6    9

Sorry for the crappy “art”, but getsubtree(3) would return the whole subtree rooted at the node with key 3, specifically:

     3
  1     4
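
In code terms, the lookup is just a standard BST descent that returns the node instead of its value. Here’s a minimal sketch of the idea (the real method in the repo may differ in the details):

    def _getsubtree(self, key):
        # descend exactly like a normal BST lookup...
        if key == self.key:
            return self          # ...but return the node itself, not self.value
        if key < self.key and self.left is not None:
            return self.left._getsubtree(key)
        if key > self.key and self.right is not None:
            return self.right._getsubtree(key)
        return None              # key isn't in the tree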

Let me know if you have trouble understanding something specific in the code, I’ll try to explain or make the code clearer if that’s the case.

Text

More Clojure - Tail Call Optimization and Loop/Recur

For those of you who read my previous post on Clojure: I decided to look deeper into recursion, and into the Clojure equivalent of Scheme’s Tail Call Optimization (TCO).

For those of you who, like myself, don’t have much of a functional programming background (I remember discussing TCO in some Prolog classes in college), let’s talk a bit about what it means and why it’s a big deal.

In functional programming (and sometimes in other paradigms), recursion comes across as a very elegant way of writing functions in a more “natural” form, given that a problem can be decomposed into one or more base cases and one or more sub-problems with a similar structure.

One good example of this is calculating the factorial of n, obtained by multiplying n * (n-1) * ... * 1. The base case is, obviously, 1, with result 1; otherwise, the result is n * factorial(n-1), basically the same problem on a “smaller” input.

Let’s see how it could be implemented trivially in Clojure:

(defn fact [n]
  (if (= n 1) 1 (*' n (fact (dec n)))))

Basically if n=1, 1, otherwise, n * fact(n - 1).

What’s that? You noticed I’m not using the * function, but instead *'? Good, I just wanted to see if you were reading carefully.

*' basically allows for arbitrary precision, promoting the result of the calculation to a BigInt if needed. The factorial function grows very rapidly, blowing past the maximum size of a long for relatively small inputs (21! already overflows it).
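
To make this concrete, here’s roughly what a quick REPL check shows (the exact exception message may vary between Clojure versions):

(* 2 Long/MAX_VALUE)   ; throws ArithmeticException: integer overflow
(*' 2 Long/MAX_VALUE)  ;=> 18446744073709551614N, auto-promoted to a BigInt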

Even this naive approach is good enough to calculate numbers like fact(10000), which is… huge. But what if we really want to go big and try to calculate fact(100000)? Well, we get a nasty StackOverflowError.

Why is that? Well, for each recursive call, a new frame is pushed onto our execution stack, until our input reaches 1; only then are the multiplications evaluated, as the stack unwinds all the way back to the original call, which returns the value. So for fact(100000), we’re pushing 100000 frames onto the stack even before we start doing any calculations.

The Clojuristic way of dealing with this and avoiding stack consumption is to use loop and recur, which, since the JVM itself doesn’t provide support for TCO, allow the function to be rewritten iteratively. Our factorial would look something like this:

(defn fact_tco [n]
  (loop [cnt n acc 1]
    (if (zero? cnt) acc 
      (recur (dec cnt) (*' cnt acc)))))

A tad more complicated, but let’s see what’s going on here. In the body of the function, we use loop to specify the point to which the recursion is going to “jump”, and we provide an additional argument, an accumulator for the calculations done so far, initialized to 1.

Since we’re going to decrement cnt on each call, we test whether we have reached zero; if so, the result is the value stored in acc, otherwise we recurse, redefining the parameters as cnt - 1 and the total accumulated so far multiplied by the current cnt.

Although this construct is slightly more complicated to grasp, compared to the very natural naive implementation, it is also extremely powerful. What happens is that now Clojure only needs to keep one frame of execution, basically evaluating the function, and jumping to the start again with redefined arguments. We can now call (fact_tco 100000) without fear of blowing up our execution stack, because you know… it might come in handy.

Remember the examples from last post where I implemented Fibonacci in a bunch of different ways? Well, here’s one more!

(defn looprecur_fibonacci [n]
  (loop [first 0 second 1 iter 0]
    (if (= iter n) first (recur second (+ first second) (inc iter)))))

Using the same technique as above: binding the arguments in loop and using recur to jump back to that point with new values. To figure out what needs to go in loop, I think of it as the context for the execution of each call; in this case, the two numbers we’re adding up, and how many terms of the sequence we’ve calculated already, so that we know when to stop.
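
A quick sanity check at the REPL:

(looprecur_fibonacci 0)   ;=> 0
(looprecur_fibonacci 10)  ;=> 55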

I’m still far from mastering these constructs, but since writing about them is helpful for me, I might as well share my “findings”.

As you might have figured out from the several Fibonacci examples, the same end can be accomplished in several different ways, and it’s usually up to you to figure out what suits your particular use case the best.

The other way of using recursion in a non-stack-consuming way in Clojure is the trampoline construct, but I’ll leave that for another post.

Just as a last example of Clojure’s flexibility, I’ll leave you with another example for calculating the factorial of 100000, you know, in case you need it:

(reduce *' (range 1 100001))

It’s hard to get more functional than this, don’t you think? Since range gives us a lazy sequence, not only do we not consume the stack, we also don’t have to materialize all 100000 values in memory before calling reduce, which is pretty cool (for my particular definition of cool).

With all this variety, how do you choose what’s best for you? Well, I in particular have been using a healthy mix of the REPL, the time function to “benchmark” solutions, the Clojure docs and StackOverflow.
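
For instance, a quick-and-dirty comparison at the REPL might look something like this (elapsed times will obviously vary from machine to machine):

(time (do (fact 10000) :done))                 ; naive recursive version
(time (do (fact_tco 10000) :done))             ; loop/recur version
(time (do (reduce *' (range 1 10001)) :done))  ; lazy reduce version
;; each call prints "Elapsed time: ... msecs"; the :done just keeps the
;; REPL from printing a bigint with tens of thousands of digits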

More on Clojure next time, or perhaps some stuff about graphs using Python… we’ll see. As usual, feel free to point out mistakes, make suggestions, offer me drinks, whatever.

Photo
algopop:

A JavaScript mosh pit simulator based on the academic paper Collective Motion of Moshers at Heavy Metal Concerts, co-authored by Matt Bierbaum.
Text

A little bit of Clojure

So many languages, so little time.

I love trying out new languages and paradigms. I don’t always get to invest a lot of time in each one, but I still like getting a feel for them.

One thing I like is functional programming. Keep in mind I said like, as opposed to “have a decent amount of knowledge about”.

A while ago I dabbled a bit in Scala, and got through Mr. Odersky’s great Functional Programming Principles in Scala, which I highly recommend.

Recently I decided to check out Clojure. Where Scala is basically a multi-paradigm language with an emphasis on functional programming, Clojure is basically a functional language that compiles to the JVM. In Scala, it’s relatively easy to pick up the basics if you know Java: you figure out some basic differences and you can get stuff working, even though it takes time to learn enough to take advantage of the language beyond diminished verbosity. Clojure, on the other hand, is a LISP dialect. Yes, that’s right, parentheses galore. I had never programmed in a LISP-like language before; the closest I came was some OCaml ages ago in college, knowledge of which has all but washed away.

To give Clojure a whirl, I decided to implement the “Hello World” of functional languages, good ol’ Fibonacci.

So I started out with the most basic implementation possible, using recursion:

(defn fib [x]
    (if (< x 2) x
    (+ (fib (dec (dec x))) (fib (dec x)))))

Quite elegant, don’t you think? Ok, ok, there are a lot of parentheses, and this specific implementation will blow up our stack in no time, but still, it’s an elegant, albeit naive, solution to the problem.

So let’s analyse what’s going on here. A function call in Clojure takes the form (function arg1 … argn); that’s important for understanding this bit of code. So basically we’re defining a function named fib such that, for a given number x, if x < 2 (yes, < is a function too, we’re just used to seeing it in infix form instead of the prefix form used in Clojure), the Fibonacci number of order x is x itself. Otherwise, it’s the sum (the + function) of fib of x-2, (dec (dec x)): decrement x twice; and fib of x-1, (dec x).

This syntax is a bit weird for those of us not initiated in the brotherhood of LISP, but the basic rules are extremely simple: everything is a function, and all function calls have the same form.

Ok, this is all fine and dandy, but like we said, it doesn’t scale: as x increases, this trivial approach ends up making an exponentially larger number of recursive calls, most of them recomputing branches we’ve already calculated.

There are usually two ways out of this predicament. The first is sending recursion to hell and just implementing an iterative version, but that’s the coward’s way out, so we’ll go with the second one: memoization.

Memoization is basically a fancy term for keeping a cache that maps an input to the value of the function called on that input. This way, when we call (fib 5), we check whether our little cache has an entry for the argument 5; if it does, we return the value associated with it, otherwise we calculate it, save it in the cache, and return it.

Seems like a big change to our code? It’s a good thing this is such a common pattern for dealing with recurring sub-branches of recursive calls, common enough that Clojure gives us a little hand:

(def fib_mem
  (memoize
    (fn [x]
      (if (< x 2) x
        (+ (fib_mem (dec (dec x))) (fib_mem (dec x)))))))

See what we did there? Basically the logic of our little program remains exactly the same. The only real change is that we wrapped our function definition with the function memoize.
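
By the way, memoize isn’t magic; conceptually it does something like this (a rough sketch of the idea, not Clojure’s actual implementation):

(defn my-memoize [f]
  (let [cache (atom {})]                  ; mutable map from args to results
    (fn [& args]
      (if-let [e (find @cache args)]
        (val e)                           ; cache hit: return the stored result
        (let [result (apply f args)]
          (swap! cache assoc args result) ; cache miss: compute and store...
          result)))))                     ; ...then return it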

Ok, I know it seems a bit different; we’re using def instead of defn, what’s the deal with that? And an fn appeared out of the blue, dahell?

Actually, the defn that we used before is simply syntactic sugar for:

(def function_name (fn [argument] ...))

We create an anonymous function with fn ..., and define the name function_name to refer to it; defn basically simplifies this pattern of creating an anonymous function and giving it a name.

So, back to our memoized version, we have the same logic, the same simplicity, but now our implementation can take on very reasonable orders of Fibonacci, and do so quite quickly.

But we’re just getting started… one thing I really liked when learning Scala was its pattern matching capabilities, so I looked for a way of implementing this problem in Clojure with something similar:

(def fib_matching_mem
  (memoize
    (fn [n]
      (cond
        (< n 2) n
        :else (+ (fib_matching_mem (- n 1)) (fib_matching_mem (- n 2)))))))

We use memoization like before, since we’ve established a purely recursive function won’t cut it. Here we introduce the cond macro, which accepts a set of test/expression pairs, evaluates each test in order, and evaluates the expression paired with the first test that is truthy (:else is simply a keyword, and keywords are always truthy, which makes it a convenient default case). We maintain the same logic as before, only with a different construct; it may be a bit overkill for such a trivial problem, but it’s still a very nice tool to keep in mind.

I think this is enough recursion for now, don’t you? How about we try an iterative version?

Let’s go with this one:

(defn fib_map [n]
  (last (take (+ n 1)
    (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1])))))

Ok, give it a look, take a deep breath… seems complicated? Well, it is and it isn’t. We’re using a couple of new functions, so in that sense this implementation requires a bit more knowledge of the language and its builtins; on the other hand, the rules are the same, and that helps us interpret this code fairly easily, provided we can figure out what each function does. Let’s do the tour, from the inside out (usually the best way to go about figuring out Clojure code).

First we have an anonymous function defined as (fn [[a b]] [b (+ a b)]). Basically, this accepts as argument a vector, whose first two elements are bound to the names a and b (via a mechanism called destructuring, a tiny example of which follows), and returns a new vector of two elements, [b (+ a b)]. For instance, applying this function to [0 1] will yield [1 1].
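
A tiny taste of destructuring at the REPL:

(let [[a b] [0 1]]  ; bind a to the first element of the vector, b to the second
  [b (+ a b)])      ;=> [1 1]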

Then we have iterate, which accepts as first argument a function of one argument, and as second argument a seed value that matches what that function expects (it’s ok if you want to read that again a couple of times…). It returns a lazy sequence consisting of the seed, then the result of applying our anonymous function to the seed, then the result of applying it to that result, and so on, ad infinitum. For example, if we take the first 4 elements of our definition, we’ll get:

[[0 1] [1 1] [1 2] [2 3]]

Simpler now, isn’t it?

Now we have (map first (iterate (fn [[a b]] [b (+ a b)]) [0 1])). Since we already know what the innermost part does, let’s ignore it and simplify to (map first collection). So, map is another of those higher-order functions, like iterate, that take functions and do something with them. In this case, map takes a function and a collection as arguments, and lazily applies the function passed as first param to each of the elements of the collection. In our code, we’re using it with the function first, which returns the first element of a collection, so let’s see what it does to our example from before (the … is just to keep in mind that this collection is actually infinite and being evaluated lazily):

(map first [[0 1] [1 1] [1 2] [2 3] ...]) --> [0 1 1 2 ...]

Simple, right? That’s the advantage of everything being a function, with the rules always the same: you can break complex code into small digestible chunks, figure out what they do, and then build on that.

But let’s carry on. Simplified, we have (take (+ n 1) collection). take simply does what it says: it takes a number of elements from a collection, in this case n+1, yielding a collection with the first n+1 elements of the result we’ve seen before. So if we do:

(take 4 (map first [[0 1] [1 1] [1 2] [2 3] ...]))

We’ll get:

[0 1 1 2]

The last part is pretty self-explanatory: last simply returns the last element of the collection. I’ll concede that this seems like kind of a convoluted way to implement something so simple, but I think it’s a good example of the expressiveness of Clojure, and of how apparently complex code can be deconstructed into small functional chunks that are easily understood, and then composed back into the global solution.

As usual, I set up a little gist here: https://gist.github.com/pcalcao/ea4176719d778ea3ab9e with these examples, and yet another one that I’ll talk about next time, since this is getting a bit lengthy as it is.

Overall, Clojure is a fun (fun, right? right? like, functional, get it?! hun, hun?) language, and I will surely find the time to delve a bit deeper into it. I believe the recent re-appearance of functional languages as mainstream alternatives is an indicator that in a couple of years they’ll be playing a very important role in the language ecosystem, and it’ll pay off to have learned a thing or two about them (did I mention it’s fun already?). So take your pick of the lot, Clojure, Scala, Haskell, Scheme, OCaml, whatever, and learn you some functional programming for great good!

Link

Interesting article, may come in handy.

(Source: inercia)

Text

Next step… Splay Tree

In the last post, we transformed a Binary Tree into a Binary Search Tree, arguably a much more useful data structure, and we did that by applying some small changes to each method. We also changed some signatures so we could use Python’s dictionary interface, and the result was pretty decent overall.

Now we’ll take it one step further. We’ll take our BST and use it to create a Splay Tree. This time I won’t even cheat with signature changes and things like that; we’ll use composition and simply create a new class that decorates the interface of the BST we coded before.

For those who don’t want to read through the Wikipedia article, let me try to recap what a Splay Tree is, and why we should bother knowing how to build one.

A Splay Tree is basically a self-balancing Binary Search Tree. On each operation (retrieval, insertion, deletion), the internal structure is rearranged to keep the most recently accessed nodes at the top of the tree. If we assume a reasonably random access pattern, the tree can be expected to remain balanced. On the other hand, just like a regular BST, it is possible for a Splay Tree to get so unbalanced that it becomes similar to a Linked List, with O(n) access time for a random element. We’ll get to that.

Let’s sum up:

Advantages

  • Most recently accessed elements closest to the root (nice for caches).
  • Expected balancing of the tree if we assume somewhat random access.
  • Easy to implement (you’ll see, piece of cake).
  • The fact that the most recently accessed elements are close together and closer to the root gives practical advantages due to Locality of Reference.

Disadvantages

  • No real guarantee of balancing (access all elements in increasing order, for instance, and hell is let loose upon your tree).
  • Things can get complicated quickly if you have several threads sharing the same tree, since even reads change the structure.
  • Harder to use as an immutable data structure in a purely functional context, since each query would have to return both the value and the new version of the tree.

As you might have guessed by now, what we really need to implement is a splay function that modifies the binary search tree to pull the most recently accessed element to the top.

There are three base cases for splay, as you can see here.

Let’s assume we’re splaying element x:

The first base case is when p, the parent of x, is the root. This case is called zig. When this is the case, we rotate x with its parent p: if x is the left child, p becomes its new right child, x’s former right child becomes p’s new left child, and x’s left child stays the same. If x is the right child, we just mirror all of that.

The second case, zig-zig, applies when both element x and its parent p are left children, or both are right children. In this case we first rotate p with its parent g (x’s grandparent), and then we rotate x with p. Seems complicated? It’s not. Let’s generalize our definition of zig: swap the element with its parent, adjusting subtrees so that the binary search tree invariant holds true, which is to say, ignore the restriction that says zig only happens when p is the root. Now, with our more relaxed definition, zig-zig is just a matter of calling zig for the element p, and then calling it for the element x (zig and zig, makes sense, right?).

The third and last case, called zig-zag, happens when x is a left child and p is a right child, or the reverse. This step consists of rotating x with p, and then rotating x again with its new parent, formerly g. Again, it’s very simple if you just think of it in terms of basic rotations: we zig x, and then zig x again.

What does all this tell us? We implement zig, and the rest boils down to identifying each of the base cases. Does it seem simple? Good, it really is.

So to implement a Splay Tree, we need a splay method that decides which of the base cases should be applied, until the most recently accessed element is finally at the root of the tree.

Based on the key, we get the corresponding sub-tree, and analyze it to check whether we should zig, zig-zig or zig-zag.

    def __splay(self, key):
        subtree = self._innertree._getsubtree(key)
        if subtree:
            while subtree.parent is not None:
                if self.__should_zig(subtree):
                    # parent is root
                    self.__zig(subtree)
                elif self.__should_zigzig(subtree):
                    # both element and parent are on the same subtree
                    self.__zigzig(subtree)
                else:
                    # element is one subtree, parent is in another
                    self.__zigzag(subtree)
            self._innertree = subtree

While the element we are applying the splay to isn’t at the top of the tree, we continue applying the correct step.

As you’ve noticed, I didn’t want the code to get too jammed up in that function, and wanted it to stay reasonably readable, so I declared a bunch of helper methods to make the syntax more natural, like __should_zigzig, etc.:

    def __should_zig(self, subtree):
        return subtree.parent.parent is None

    def __should_zigzig(self, subtree):
        return (self.__bothleft(subtree, subtree.parent) or 
                   self.__bothright(subtree, subtree.parent))

    def __isleftchild(self, subtree):
        return subtree.parent.left == subtree

    def __isrightchild(self, subtree):
        return subtree.parent.right == subtree

    def __bothleft(self, subtree1, subtree2):
        return self.__isleftchild(subtree1) and self.__isleftchild(subtree2)

    def __bothright(self, subtree1, subtree2):
        return self.__isrightchild(subtree1) and self.__isrightchild(subtree2)

Now for the “hard” part, we implement our relatively generic zig method.

    def __zig(self, subtree):
        parent = subtree.parent
        gparent = subtree.parent.parent
        if gparent and self.__isleftchild(parent):
            gparent.left = subtree
        elif gparent and self.__isrightchild(parent):
            gparent.right = subtree

        # rotate to the right if node is left child
        if self.__isleftchild(subtree):
            rightside = subtree.right
            subtree.right = parent
            parent.parent = subtree
            parent.left = rightside
            if rightside is not None:
                rightside.parent = parent  # keep the moved subtree's parent pointer consistent
            subtree.parent = gparent
        # rotate to the left if node is right child
        elif self.__isrightchild(subtree):
            leftside = subtree.left
            subtree.left = parent
            parent.parent = subtree
            parent.right = leftside
            if leftside is not None:
                leftside.parent = parent  # keep the moved subtree's parent pointer consistent
            subtree.parent = gparent

As you can see, it’s just a matter of switching branches around, nothing complicated, right?

We’re almost done.

The hard part was simply identifying that zig-zig and zig-zag could be reduced to two consecutive zig operations. Goes to show that more time thinking saves a lot of time coding, I guess!


    def __zigzig(self, subtree):
        self.__zig(subtree.parent)
        self.__zig(subtree)

    def __zigzag(self, subtree):
        self.__zig(subtree)
        self.__zig(subtree)

And this is it… we’ve implemented the core operations of a splay tree. We still need to delegate to the underlying binary search tree on insert, delete and lookup:

    def __setitem__(self, key, value):
        self._innertree[key] = value
        self.__splay(key)

    def __getitem__(self, key):
        if self._innertree.key != key:
            # upgrade to root
            self.__splay(key)
        # we only bother getting the value _if_ we splayed successfully,
        # otherwise the element doesn't exist
        if self._innertree.key == key:
            return self._innertree[key]

    def __delitem__(self, key):
        self.__splay(key)
        # we do this test for the same reason we do it in __getitem__
        if self._innertree.key == key:
            del self._innertree[key]
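
To see the whole thing in action, here’s a hypothetical usage sketch; it assumes the wrapper class is called SplayTree and that its constructor creates an empty inner BinarySearchTree, which isn’t shown in this post:

if __name__ == '__main__':
    tree = SplayTree()  # hypothetical constructor: wraps an empty BinarySearchTree
    tree[5] = 'a'
    tree[3] = 'b'
    tree[7] = 'c'
    print tree[3]              # prints 'b', and splays the node with key 3 to the root
    print tree._innertree.key  # prints 3, the most recently accessed key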

And that’s it! This time, instead of a gist, I took the time to organize the code into a GitHub project. It has all the code from the previous posts, this one, and some unit tests as well; you can find it here: https://github.com/pcalcao/datastructures

If you happen to come across this and have any suggestions, let me know, or just fork and send me the changes you think should be applied via a pull request. I’ll be quite happy to get some improvements in!

That’s all for now; we’ll see whether we stick with trees or move on to something else in the next post… so much to write about, so little time. That being said, I do accept suggestions on topics to implement/rant about.

Thanks for reading! :)

Link

inercia:

A comprehensive list of libraries for uncommon data structures…

Text

Embrace the basics… BST

Ok, so I promised you all that the Binary Tree I showed in the last post could actually be transformed into something useful… I wasn’t even lying!

We take the basics of a Binary Tree, covered here: http://intothewebs.tumblr.com/post/40256328302/embrace-the-basics-binary-tree, and make some changes to transform it into a Binary Search Tree. When we’re done, it might seem like we wrote everything from scratch, but you’ll see we actually maintained the main structure and implemented some trivial changes on top of what we already had.

We could implement the BST assuming all our elements are comparable, but we want to be a tad bit more flexible, so we’ll build it to associate a key with a value. The key will give us the structure of the tree, and the value will be saved in each node.

Let’s start with a small change to our constructor to make it save a key and value:

class BinarySearchTree():
    def __init__(self, key=None, val=None, parent=None):
        self.key = key
        self.value = val
        self.left = None
        self.right = None
        self.parent = parent

We basically set default values for key, val and parent, and that’s it: we now have a constructor for a BinarySearchTree (the parent link will come in handy later, when we build on this class).

In our previous implementation, we manually defined the left and right branches of the tree, but in a BST, the distribution of the keys needs to maintain the invariant that the key of every node in the left sub-tree is smaller than the root key, and the key of every node in the right sub-tree is larger than the root key. Since the BST is a recursive data structure, each sub-tree is itself a BST, and this invariant must hold.

So, we’ll provide a single insert method, and handle the details necessary to keep this invariant. It would be kind of lame to throw that responsibility onto the user, right?

    def insert(self, key, value):
        if self.key is None:
            self.key = key
            self.value = value
        elif key < self.key:
            if self.left is None:
                self.left = BinarySearchTree(parent=self)
            self.left.insert(key, value)
        elif key > self.key:
            if self.right is None:
                self.right = BinarySearchTree(parent=self)
            self.right.insert(key, value)
        elif key == self.key:
            self.value = value

Is that it, you ask? Yep, pretty simple, right? If the root itself has no key/value, set them to the values passed as parameters; otherwise, figure out whether it needs to go to the left or to the right (creating that node if needed) and call insert on it. If the key matches the current node’s key, simply update the value.

So now we can insert values in our tree… we just need a way to fetch a value given its key, right? Not that hard, let’s see:

    def get_item(self, key):
        if key == self.key:
            return self.value
        if key < self.key and self.left is not None:
            return self.left.get_item(key)
        if key > self.key and self.right is not None:
            return self.right.get_item(key)
        return None

We start at the root and check whether the key we’re looking for matches the root’s key; if it doesn’t and it’s lower, we recursively search to the left, otherwise to the right. If we hit a dead end, meaning a leaf, without finding the key, we return None.

That’s the great thing about recursive data structures: we can do most operations with relatively simple logic. Understand how the recursion is built, create the base case, and figure out how to apply it to the sub-structure. In our Binary Search Tree example, the recursion is built on the relative ordering of the keys, so that’s what we play with.

In our Binary Tree example, we also created a method to traverse the tree in order, let’s do the same here. We’ll make it return, for each node, a tuple with (key,value), like this:

    def ordered_traverse(self):
        if self.left is not None:
            for (x, y) in self.left.ordered_traverse():
                yield (x, y)
        yield (self.key, self.value)
        if self.right is not None:
            for (x, y) in self.right.ordered_traverse():
                yield (x, y)

Yes, yes, I know, I took it a bit further and added that yield in there to turn our traversal into a generator. That’s just a little bonus, but the code is pretty simple: if there are elements to the left, call ordered_traverse on them, then yield the root, then the result of ordered_traverse on the right sub-tree. Easy peasy.

Let’s call a simple test:

if __name__ == '__main__':
    tree = BinarySearchTree(10, 'a')
    tree.insert(7, 'b')
    tree.insert(4, 'c')
    tree.insert(8, 'd')
    tree.insert(14, 'aa')
    tree.insert(12, 'ba')
    tree.insert(15, 'ba')

    print [(k, v) for (k, v) in tree.ordered_traverse()]

That returns:

[(4, 'c'), (7, 'b'), (8, 'd'), (10, 'a'), (12, 'ba'), (14, 'aa'), (15, 'ba')]

So we basically created a data structure that allows for insertion and lookup of elements in average O(log(n)) time. Yes, yes, you can do that with any Hashtable in O(1), so is there any use for a BST? A BinarySearchTree, unlike a Map, keeps an implicit ordering of its elements, which allows for some nice features; the ordered traversal we implemented is one example. We could also easily implement search by proximity (return the value with the key closest to X), obtain the values for a range of keys (sketched below), and several other features, much more efficiently than with a Map/Hashtable. What’s the best data structure? It depends, always.
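
For instance, here’s a hypothetical range_items method (the name and code are mine, it’s not in the gist) sketching how we could lazily yield every (key, value) pair with a key between lo and hi, pruning subtrees that can’t contain keys in range:

    def range_items(self, lo, hi):
        # yield (key, value) pairs with lo <= key <= hi, in key order
        if self.key is None:
            return
        if lo < self.key and self.left is not None:
            for pair in self.left.range_items(lo, hi):   # smaller keys live on the left
                yield pair
        if lo <= self.key <= hi:
            yield (self.key, self.value)
        if hi > self.key and self.right is not None:
            for pair in self.right.range_items(lo, hi):  # larger keys live on the right
                yield pair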

Well, you might say, we have some functionality akin to a Map/Hashtable/Dictionary, but the usage is not half as simple or pretty: we can’t do something like tree[2] = ‘s’.

Can’t, you say?! This is Python, of course you can! Let’s make some itty bitty changes to our insert and get methods:

    def __setitem__(self, key, value):
        if self.key is None:
            self.key = key
            self.value = value
        elif key < self.key:
            if self.left is None:
                self.left = BinarySearchTree(parent=self)
            self.left.__setitem__(key, value)
        elif key > self.key:
            if self.right is None:
                self.right = BinarySearchTree(parent=self)
            self.right.__setitem__(key, value)
        elif key == self.key:
            self.value = value

    def __getitem__(self, key):
        if key == self.key:
            return self.value
        if key < self.key and self.left is not None:
            return self.left.__getitem__(key)
        if key > self.key and self.right is not None:
            return self.right.__getitem__(key)
        return None

We use the special methods __setitem__ and __getitem__, and we can now use our tree like a Python dictionary:

if __name__ == '__main__':
    tree = BinarySearchTree(10, 'a')
    tree[7] = 'b'
    tree[4] = 'c'
    tree[8] = 'test'
    tree[14] = 'aa'
    tree[12] = 'ba'
    tree[15] = 'ba'

    print tree[8]

    print [(k, v) for (k, v) in tree.ordered_traverse()]

Which prints:

test
[(4, 'c'), (7, 'b'), (8, 'test'), (10, 'a'), (12, 'ba'), (14, 'aa'), (15, 'ba')]

See what we did there?!

That’s it for today; gratz to those brave enough to stick with me until the very end! Next time we’ll take this Binary Search Tree and build something else. “But, but… that’s what you said last time!” Yes, yes it was.

As usual, there’s a gist for this code, and a tad bit more, including delete, which we didn’t discuss here: https://gist.github.com/4660643

See you next time, be sure to send me any questions/corrections/feedback/whatever, since I probably missed something there.