Write a hashing program and provide it with input. Measure the program's performance. Use many randomly generated input sets to obtain average performance.
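A small timing harness along these lines could drive the experiments; Python is assumed here, and the insert/find interface (and the time_operations name) is hypothetical and must match whatever table implementation you write.

    import random
    import time

    def time_operations(make_table, n_keys, n_trials=10):
        """Average insert, successful-find and unsuccessful-find times over random input sets."""
        totals = {"insert": 0.0, "hit": 0.0, "miss": 0.0}
        for _ in range(n_trials):
            keys = random.sample(range(1, 10 * n_keys), n_keys)  # one random input set
            misses = [-k for k in keys]                           # keys that were never inserted
            table = make_table()
            start = time.perf_counter()
            for k in keys:
                table.insert(k)
            totals["insert"] += time.perf_counter() - start
            start = time.perf_counter()
            for k in keys:
                table.find(k)                                     # successful searches
            totals["hit"] += time.perf_counter() - start
            start = time.perf_counter()
            for k in misses:
                table.find(k)                                     # unsuccessful searches
            totals["miss"] += time.perf_counter() - start
        return {op: t / n_trials for op, t in totals.items()}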

(1): First write a simple hashing routine that uses linear probing. Record insertion, successful-find, and unsuccessful-search times as the load factor grows.
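A minimal sketch of such a routine, assuming open addressing with linear probing (the class and method names are illustrative, not prescribed by the exercise):

    EMPTY = object()

    class LinearProbingTable:
        def __init__(self, size=1009):                  # a prime table size
            self.size = size
            self.slots = [EMPTY] * size
            self.count = 0

        def load_factor(self):
            return self.count / self.size

        def insert(self, key):
            if self.count >= self.size - 1:             # keep one empty slot so searches terminate
                raise RuntimeError("table is full")
            i = hash(key) % self.size
            while self.slots[i] is not EMPTY:           # probe h, h+1, h+2, ...
                if self.slots[i] == key:
                    return                              # already present
                i = (i + 1) % self.size
            self.slots[i] = key
            self.count += 1

        def find(self, key):
            i = hash(key) % self.size
            while self.slots[i] is not EMPTY:
                if self.slots[i] == key:
                    return True                         # successful search
                i = (i + 1) % self.size
            return False                                # unsuccessful search stops at an empty slot

Passing this class to the harness above, for example time_operations(lambda: LinearProbingTable(10007), 7500), measures it at a load factor of roughly 0.75; repeating with different n_keys values shows how the times grow with the load factor.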

(2): Implement double hashing. Compared with linear probing, it should perform better as the load factor grows. Does this actually happen?
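A sketch of the double-hashing variant; the second hash function used here is the common R - (key mod R) form with R a prime smaller than the table size, which is an assumption rather than something the exercise specifies:

    EMPTY = object()

    class DoubleHashingTable:
        def __init__(self, size=1009, r=997):           # size prime, r a smaller prime
            self.size = size
            self.r = r
            self.slots = [EMPTY] * size
            self.count = 0

        def _h1(self, key):
            return hash(key) % self.size

        def _h2(self, key):
            return self.r - (hash(key) % self.r)        # never zero, so every probe advances

        def insert(self, key):
            if self.count >= self.size - 1:             # keep one empty slot so searches terminate
                raise RuntimeError("table is full")
            i, step = self._h1(key), self._h2(key)
            while self.slots[i] is not EMPTY:           # probe h1, h1+h2, h1+2*h2, ...
                if self.slots[i] == key:
                    return
                i = (i + step) % self.size
            self.slots[i] = key
            self.count += 1

        def find(self, key):
            i, step = self._h1(key), self._h2(key)
            while self.slots[i] is not EMPTY:
                if self.slots[i] == key:
                    return True
                i = (i + step) % self.size
            return False

Because the table size is prime, every step value is relatively prime to it, so each probe sequence can visit the whole table and the comparison with linear probing under the same harness is fair.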

(3): Implement lazy deletion: just mark deleted elements as deleted. The flaw in this idea is that eventually every slot in the table will be either occupied or previously occupied (now marked deleted), so an unsuccessful search will probe the entire table. The method keeps successful searches fast, but unsuccessful searches and insertions become terrible.

Verify this with a random mixture of inserts, successful finds, unsuccessful finds, and deletes, trying to hold the table at a fixed load factor (a sketch of the table and such a workload follows below).
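A sketch of lazy deletion on a linear-probing table, together with a random workload driver that tries to hold a target load factor. The operation mix and the names (LazyDeleteTable, workload, DELETED) are illustrative assumptions, not part of the exercise:

    import random

    EMPTY, DELETED = object(), object()

    class LazyDeleteTable:
        def __init__(self, size=1009):
            self.size = size
            self.slots = [EMPTY] * size
            self.live = 0                               # keys currently present

        def insert(self, key):
            i, first_free = hash(key) % self.size, None
            for _ in range(self.size):
                if self.slots[i] is EMPTY:
                    break                               # key is definitely absent
                if self.slots[i] == key:
                    return
                if self.slots[i] is DELETED and first_free is None:
                    first_free = i                      # remember a reusable slot
                i = (i + 1) % self.size
            loc = first_free if first_free is not None else i
            if self.slots[loc] is not EMPTY and self.slots[loc] is not DELETED:
                raise RuntimeError("table is full")
            self.slots[loc] = key
            self.live += 1

        def _locate(self, key):
            i = hash(key) % self.size
            for _ in range(self.size):                  # guard: the table may contain no empty slot at all
                if self.slots[i] is EMPTY:
                    return None
                if self.slots[i] == key:
                    return i
                i = (i + 1) % self.size
            return None

        def find(self, key):
            return self._locate(key) is not None

        def delete(self, key):
            i = self._locate(key)
            if i is not None:
                self.slots[i] = DELETED                 # mark only; the slot never becomes empty again
                self.live -= 1

    def workload(table, target_load, steps=100_000):
        """Random mix of inserts, hits, misses and deletes around one load factor."""
        present, next_key = [], 0
        for _ in range(steps):
            if not present or table.live / table.size < target_load:
                table.insert(next_key); present.append(next_key); next_key += 1
            else:
                op = random.choice(("hit", "miss", "delete"))
                if op == "hit":
                    table.find(random.choice(present))
                elif op == "miss":
                    table.find(-random.randint(1, 10**9))   # negative keys are never inserted
                else:
                    table.delete(present.pop(random.randrange(len(present))))

Run long enough at a fixed load factor, the table ends up with no truly empty slots, so unsuccessful searches and inserts of new keys eventually scan the whole table, which is exactly the degradation the exercise asks you to observe.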

(4): Help me show that my deletion method is good. In this method I keep track of the hash chains running through each location, and I can transition a slot back to empty when its hash-chain count drops to zero. I also move items forward in the table when they are found, for the double effect of making future finds of that element fast and reducing the hash-chain count on slots wherever possible.
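The exercise does not pin down the bookkeeping, so the following is only one possible reading of the scheme, sketched on top of linear probing: every slot carries a count of how many stored keys probe through it on the way to their final location, a deleted slot reverts to truly empty as soon as that count reaches zero, and a successful find relocates the key to the earliest deleted slot on its probe path, decrementing the counts it no longer passes through. The names here (ChainCountTable, passes, _move_forward) are mine, not the author's:

    EMPTY, DELETED = object(), object()

    class ChainCountTable:
        def __init__(self, size=1009):
            self.size = size
            self.slots = [EMPTY] * size
            self.passes = [0] * size                    # stored keys probing *through* each slot

        def _home(self, key):
            return hash(key) % self.size

        def _path(self, start, stop):
            i = start                                   # slots between a key's home and its location
            while i != stop:
                yield i
                i = (i + 1) % self.size

        def insert(self, key):
            i, first_free = self._home(key), None
            for _ in range(self.size):
                if self.slots[i] is EMPTY:
                    break                               # key is definitely absent
                if self.slots[i] == key:
                    return
                if self.slots[i] is DELETED and first_free is None:
                    first_free = i
                i = (i + 1) % self.size
            else:
                if first_free is None:
                    raise RuntimeError("table is full")
            loc = first_free if first_free is not None else i
            for j in self._path(self._home(key), loc):
                self.passes[j] += 1                     # this key's chain now runs through j
            self.slots[loc] = key

        def find(self, key):
            i, first_deleted = self._home(key), None
            for _ in range(self.size):
                if self.slots[i] is EMPTY:
                    return False
                if self.slots[i] == key:
                    if first_deleted is not None:
                        self._move_forward(key, first_deleted, i)
                    return True
                if self.slots[i] is DELETED and first_deleted is None:
                    first_deleted = i
                i = (i + 1) % self.size
            return False

        def _move_forward(self, key, dest, src):
            for j in self._path(dest, src):             # the chain dest..src-1 no longer carries this key
                self.passes[j] -= 1
                if self.passes[j] == 0 and self.slots[j] is DELETED:
                    self.slots[j] = EMPTY               # chain count hit zero: slot recovered
            self.slots[dest] = key
            self.slots[src] = DELETED if self.passes[src] > 0 else EMPTY

        def delete(self, key):
            i = self._home(key)
            for _ in range(self.size):
                if self.slots[i] is EMPTY:
                    return False
                if self.slots[i] == key:
                    for j in self._path(self._home(key), i):
                        self.passes[j] -= 1
                        if self.passes[j] == 0 and self.slots[j] is DELETED:
                            self.slots[j] = EMPTY
                    self.slots[i] = DELETED if self.passes[i] > 0 else EMPTY
                    return True
                i = (i + 1) % self.size
            return False

Running the workload driver from (3) against both LazyDeleteTable and this class at the same target load factor, and comparing how many slots stay truly empty after many steps, is one way to argue that the method keeps unsuccessful searches and inserts fast where plain lazy deletion does not.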