Work Note 6, DM803, fall 2016

Exercises October 10

  1. Consider an algorithm which needs a really large array, initialized to all zeros, in order to perform random memory accesses to a few entries (which are unknown before execution). In that case, we would prefer not to initialize such a large array in advance, since we are only planning to use a few of its entries. Inspired by the timestamp verification procedure from the lecture, consider how to avoid this complete initialization by being able to verify whether an entry has been set before it is used. This can be achieved by using three arrays instead of just the one array, and we refer to this as virtual initialization; a sketch appears after this exercise list. Hint: one of the arrays is used as a stack.
    This technique makes sense when initialization would otherwise ruin the complexity. In practice, some languages insist on initializing arrays. Even then, the technique can be useful when a process is repeated many times: the arrays can be allocated permanently, and on each repetition after the first, they can be reinitialized in constant time.
  2. In [WT89], instead of letting find move references of nodes such that they point to the grandparent before the move (if it exists), what would happen if we made them point to their great-grandparents or great-great-grandparents (if they exist)? See the sketch after this exercise list.
    Let us refer to the parent of a node as its 1-ancestor, its grandparent as its 2-ancestor, etc. What would happen if we changed pointers to point to their max(floor(log_1000 n), 2)-ancestor (if it exists)?
  3. In the lecture on [WT89], we aimed at an amortized time complexity of O(log n/loglog n). In the arguments, other complexities showed up, and the final result depended on these other complexities belonging to O(log n/loglog n).
    Show that for 0 < c < 1, (log n)^c belongs to O(log n/loglog n) (this was the worst-case amortized cost for a union).
    Show that loglog n belongs to O(log n/loglog n) (this was the extra time we had to spend at the bottom of the structure when we abandoned the stacks of pointers in order to get the space usage down to O(n)). One possible line of reasoning for both claims is sketched after this exercise list.
  4. Consider a standard heap. We discuss the deletemin operation, which does the following: it removes the root element, takes the currently last element E, places it in the root, and then "bubbles" it down to its correct location. Recall that 2 log n comparisons may be necessary for a deletemin operation (log is base 2). Where does the constant 2 in front of log n come from?

    How quickly (in terms of comparisons) can you find the path to a leaf down which the element E placed in the root will travel? Note that for now you do not have to do any swapping; you only have to identify the path.

    Assume that we store the heap in an array and that the leaf in question is stored in entry x. Which entries in the array does the path mentioned above correspond to, expressed in terms of x?

    Can you now find the correct location for E faster?

    How quickly (in terms of comparisons between elements) can you insert the element into its correct location? What is the total number of comparisons for the entire operation? A sketch combining these steps appears after this exercise list.

  5. Double-Ended Priority Queues (pdf)
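
Sketch for Exercise 1 (virtual initialization). The following is a minimal illustration of how the three arrays could cooperate; the class and method names are our own, and since Python insists on initializing lists, the sketch can only show the bookkeeping, not the saved initialization cost.

    class VirtualArray:
        def __init__(self, size):
            # In a language without forced initialization, none of these
            # arrays would need meaningful initial contents.
            self.data = [0] * size   # the values (may hold garbage)
            self.which = [0] * size  # which[i]: claimed stack position of i
            self.stack = [0] * size  # indices written so far, in order
            self.top = 0             # number of initialized entries

        def _initialized(self, i):
            # Entry i has been set iff which[i] points into the used part
            # of the stack and the stack entry there points back at i.
            return 0 <= self.which[i] < self.top and self.stack[self.which[i]] == i

        def write(self, i, value):
            if not self._initialized(i):
                # First write to i: push i and remember its stack position.
                self.stack[self.top] = i
                self.which[i] = self.top
                self.top += 1
            self.data[i] = value

        def read(self, i):
            # Uninitialized entries behave as if they held zero.
            return self.data[i] if self._initialized(i) else 0

If the computation is repeated, setting top = 0 reinitializes the whole structure in constant time, as discussed above.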
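
Sketch for Exercise 2. As an assumption on our part, this is phrased in terms of a plain union-find forest stored in a parent array (the details of the structure in [WT89] differ): during find, each visited node is redirected to its k-ancestor whenever that ancestor exists, before we move on.

    def find(parent, x, k=2):
        # parent[r] == r marks a root; k = 2 redirects to grandparents,
        # k = 3 to great-grandparents, and so on.
        while parent[x] != x:
            # Locate x's k-ancestor, if it exists.
            anc, steps = x, 0
            while steps < k and parent[anc] != anc:
                anc = parent[anc]
                steps += 1
            old_parent = parent[x]
            if steps == k:
                parent[x] = anc    # redirect x before the move
            x = old_parent         # then move on to the (old) parent
        return x

The variant in the last part of the exercise would correspond to calling find with k = max(floor(log_1000 n), 2) for the current number of nodes n.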
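
For Exercise 3, one possible line of reasoning (written in LaTeX notation) compares the growth rates directly; both claims reduce to the fact that every positive power of log n dominates loglog n, as seen by substituting m = log n:

    \[
      \frac{\log n/\log\log n}{(\log n)^{c}}
        = \frac{(\log n)^{1-c}}{\log\log n} \longrightarrow \infty
        \quad \text{for } 0 < c < 1,
      \qquad
      \frac{\log n/\log\log n}{\log\log n}
        = \frac{\log n}{(\log\log n)^{2}} \longrightarrow \infty .
    \]

Both ratios tending to infinity means that (log n)^c and loglog n even belong to o(log n/loglog n), which is contained in O(log n/loglog n).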
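
Sketch for Exercise 4. One way of combining the steps, assuming a 1-indexed array min-heap heap[1..n] (the function and variable names are our own). The path of smaller children is found with about log n comparisons, none of which involve E, and its entries are exactly x, floor(x/2), floor(x/4), ..., 1. The upward search below is linear for simplicity; since keys increase along the path, a binary search over these entries would locate E with only O(loglog n) comparisons, giving roughly log n + loglog n comparisons in total instead of 2 log n.

    def deletemin(heap, n):
        # heap[1..n] is a min-heap (entry 0 unused); returns the minimum
        # and the new size.
        smallest = heap[1]
        e = heap[n]                        # the element E to be relocated
        n -= 1
        # Phase 1: follow the smaller children down to a leaf, pulling
        # each one up into the hole (one comparison per level).
        x = 1
        while 2 * x <= n:
            c = 2 * x
            if c + 1 <= n and heap[c + 1] < heap[c]:
                c += 1                     # c is the smaller child
            heap[x] = heap[c]              # a move only, no comparison with e
            x = c
        # Phase 2: the hole is now at leaf entry x, and its ancestors are
        # x//2, x//4, ..., 1; shift their elements down until e fits.
        while x > 1 and heap[x // 2] > e:
            heap[x] = heap[x // 2]
            x //= 2
        heap[x] = e
        return smallest, n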

Lecture October 13


Last modified: Tue Oct 4 11:26:44 CEST 2016
Kim Skak Larsen (kslarsen@imada.sdu.dk)