Weekly Note 9, DM42, fall 2006

Discussion Section November 6

  1. Consider an algorithm that needs a very large array, initialized to all zeros, but accesses only a few of its entries (which are not known before execution). In that case, we would prefer not to initialize the entire array in advance, since only a few entries will ever be used. Inspired by the timestamp verification procedure from the lecture, consider how to avoid the complete initialization by being able to verify whether an entry has been set before it is used. This can be achieved using three arrays instead of just one, and we refer to the technique as virtual initialization. Hint: one of the arrays is used as a stack.
    This technique makes sense when initialization would ruin the complexity. In practice, some languages insist on initializing arrays. Even then, the technique can be useful when a process is repeated many times: the arrays are then allocated permanently and, after the first repetition, each subsequent repetition can be initialized in constant time.
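As an illustration of the technique (a sketch, not a worked solution), virtual initialization might look as follows in Python. Note that Python cannot truly leave memory uninitialized, so the None values merely simulate garbage; in a real setting the cross-check between the index array and the stack is what makes garbage harmless.

```python
class VirtualArray:
    """A size-n array behaving as if initialized to all zeros,
    touching only the entries that are actually written.

    data[i] holds the value of entry i, index[i] records where entry i
    was registered in stack, and stack[0..top-1] lists the entries
    written so far.  Entry i has been written iff index[i] is a
    position below top and the stack at that position points back to i.
    """

    def __init__(self, n):
        # With uninitialized allocation these arrays would hold garbage;
        # the None values simulate that here.
        self.data = [None] * n
        self.index = [None] * n
        self.stack = [None] * n
        self.top = 0  # number of distinct entries written so far

    def _written(self, i):
        j = self.index[i]
        return j is not None and 0 <= j < self.top and self.stack[j] == i

    def read(self, i):
        # Unwritten entries read as zero.
        return self.data[i] if self._written(i) else 0

    def write(self, i, v):
        if not self._written(i):
            # Register i on the stack of written positions.
            self.index[i] = self.top
            self.stack[self.top] = i
            self.top += 1
        self.data[i] = v
```

Resetting the structure amounts to setting top back to 0, which is exactly the constant-time reinitialization between repetitions mentioned above.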
  2. In [WT89], could we just as well use union by rank instead of union by weight (size)?
  3. In [WT89], find moves the reference of each visited node so that it points to the node's grandparent before the move (if it exists). What would happen if we instead moved the references to point to great-grandparents or great-great-grandparents (if they exist)?
    Let us refer to the parent of a node as its 1-ancestor, its grandparent as its 2-ancestor, etc. What would happen if we changed pointers to point to their max(floor(log_1000 n), 2)-ancestor (if it exists)?
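For concreteness, the grandparent rule can be sketched on a plain parent-array representation of the forest; the k parameter is a hypothetical generalization for experimenting with the question, not the exact [WT89] structure.

```python
def find(parent, x, k=2):
    """Find the root of x, redirecting each visited node's pointer
    to its k-ancestor (if that ancestor exists).  k=2 is the
    grandparent rule; larger k gives the variants asked about."""
    while parent[x] != x:
        # Locate the k-ancestor of x, stopping early at the root.
        anc = x
        steps = 0
        while steps < k and parent[anc] != anc:
            anc = parent[anc]
            steps += 1
        if steps == k:
            parent[x] = anc  # redirect only if the k-ancestor exists
        x = parent[x]
    return x
```

Running this on a long path and inspecting the parent array afterwards shows how quickly the different choices of k flatten the structure.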
  4. In the lecture on [WT89], we aimed at an amortized time complexity of O(log n/loglog n). In the arguments, other complexities showed up, and the final result depended on these other complexities belonging to O(log n/loglog n).
    Show that for 0 < c < 1, (log n)^c belongs to O(log n/loglog n) (this was the worst-case cost for a union).
    Show that loglog n belongs to O(log n/loglog n) (this was the extra time we had to spend in the bottom of the structure when we abandoned the stacks of pointers in order to get space usage down to O(n)).
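As a quick sanity check of both memberships (one way among several; the base of the logarithm is immaterial inside the O), the relevant ratios can be written out:

```latex
\frac{(\log n)^c}{\log n/\log\log n}
  = \frac{\log\log n}{(\log n)^{1-c}}
  \xrightarrow[n\to\infty]{} 0
  \quad (0 < c < 1),
\qquad
\frac{\log\log n}{\log n/\log\log n}
  = \frac{(\log\log n)^2}{\log n}
  \xrightarrow[n\to\infty]{} 0 .
```

Both limits are zero because loglog n grows more slowly than any fixed positive power of log n, so both functions are in fact in o(log n/loglog n).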
  5. Consider the data structures covered in the course. Which can be made partially persistent using the node-copying method from [DSST89]?
  6. Try the method from [DSST89] on a singly-linked list. The list must be kept sorted and there is one access pointer (to the first element in the list). Insert the elements in this order: 1, 5, 9, 2, 3, 4, 8, 7, 6. Then delete, in this order: 5, 4, 3, 2, 1, 7, 8, 9, 6. Change version after each update (19 versions in total). Assume that there is one extra pointer field per node. [We will not necessarily cover all of this in class.]
  7. Assume we have a singly-linked list of length n. We now repeat indefinitely: change to a new version and make a change in the last element of the list. How often is the first element of the list copied using the method from [DSST89]? Assume that there is one extra pointer field per node.
  8. Show that if there are fewer extra pointer fields than inverse pointers, then the method from [DSST89] may use time that is not amortized constant per update.
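For experimenting with the exercises above, here is a much-simplified Python sketch of node copying on a singly-linked list: only the next field is versioned, each node carries a single modification slot (playing the role of the one extra pointer field) and an inverse pointer that drives cascading copies. It is an experimental aid under these assumptions, not the full [DSST89] method.

```python
class PNode:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt    # next pointer at creation time
        self.mod = None    # the one extra pointer field: (version, new_next)
        self.pred = None   # inverse pointer: live predecessor, if any

def live_next(node, version):
    """The next pointer of node as seen from the given version."""
    if node.mod is not None and node.mod[0] <= version:
        return node.mod[1]
    return node.next

def set_next(node, new_next, version, roots):
    """Make node point to new_next in this version, copying full nodes."""
    while True:
        if node.mod is None or node.mod[0] == version:
            node.mod = (version, new_next)
            if new_next is not None:
                new_next.pred = node
            return
        # Modification slot already used: copy node with the change applied.
        copy = PNode(node.value, new_next)
        if new_next is not None:
            new_next.pred = copy
        pred = node.pred
        if pred is None:
            # node was the head: record a new access pointer instead.
            roots.append((version, copy))
            return
        node, new_next = pred, copy  # cascade: pred must now point to copy

def to_list(roots, version):
    """Read out the list contents as seen from the given version."""
    head = None
    for v, node in roots:  # roots is kept sorted by version
        if v <= version:
            head = node
    out = []
    while head is not None:
        out.append(head.value)
        head = live_next(head, version)
    return out
```

Building 1 -> 2 -> 3 as version 0 and then redirecting the middle node in versions 1 and 2 shows the second change forcing a copy whose repair lands in the head's modification slot, while all three versions remain readable through their access pointers.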

Lecture November 8


Last modified: Wed Nov 8 14:16:41 CET 2006
Kim Skak Larsen (kslarsen@imada.sdu.dk)