Introduction to Algorithms
Some Notes on Introduction to Algorithms
Fibonacci Number
[[F(n+1), F(n)], [F(n), F(n−1)]] = [[1, 1], [1, 0]]^n

Running time = Θ(log n) by repeated squaring of the matrix.
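As a sketch, the 2×2 matrix power can be computed by repeated squaring (`fib` and `mat_mul` are hypothetical names):

```python
def fib(n):
    """Compute F(n) via the matrix identity above in O(log n) multiplications."""
    def mat_mul(A, B):
        # 2x2 matrix product
        return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
                 A[0][0] * B[0][1] + A[0][1] * B[1][1]],
                [A[1][0] * B[0][0] + A[1][1] * B[1][0],
                 A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

    result = [[1, 0], [0, 1]]            # identity matrix
    base = [[1, 1], [1, 0]]
    while n > 0:                         # repeated squaring
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        n >>= 1
    return result[0][1]                  # result = [[F(n+1), F(n)], [F(n), F(n-1)]]
```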
Order Statistics
Given n elements in an array, find the k-th smallest element.
- Quick Select
  - Expected running time Θ(n)
  - Worst case Θ(n²)
- Worst-case linear-time order statistics
Select(i, n):
1. Divide the n elements into ⌈n/5⌉ groups of 5 elements each; find the median of each group. — O(n)
2. Recursively Select the median x of the ⌈n/5⌉ group medians. — T(n/5)
3. Partition with x as pivot; let k = rank(x). — O(n)
4. If i == k, return x.
   If i < k, recursively select the i-th smallest element in the lower part.
   Else, recursively select the (i − k)-th smallest element in the upper part.
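A minimal Python sketch of these steps, assuming distinct elements (`select` here is 1-indexed and the name is hypothetical):

```python
def select(arr, i):
    """Return the i-th smallest element (1-indexed) in worst-case O(n) time.
    Assumes the elements are distinct."""
    if len(arr) <= 5:
        return sorted(arr)[i - 1]
    # 1. Groups of 5; median of each group.                       O(n)
    medians = [sorted(arr[j:j + 5])[len(arr[j:j + 5]) // 2]
               for j in range(0, len(arr), 5)]
    # 2. Recursively select the median of the medians as pivot.   T(n/5)
    x = select(medians, (len(medians) + 1) // 2)
    # 3. Partition around x; k = rank(x).                         O(n)
    lower = [a for a in arr if a < x]
    upper = [a for a in arr if a > x]
    k = len(lower) + 1
    # 4. Recurse into the side containing the answer.
    if i == k:
        return x
    if i < k:
        return select(lower, i)
    return select(upper, i - k)
```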
Hash Functions
Division Method
h(k)=k mod m
Pick m to be a prime that is not too close to a power of 2 or 10.
Multiplication Method
h(k) = (A⋅k mod 2^w) >> (w − r), where A is odd, 2^(w−1) < A < 2^w, w is the machine word size, and the table has m = 2^r slots.
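A sketch with w = 64 and r = 10 (a table of m = 2^10 slots); the constant `A` below is just an arbitrary odd 64-bit value satisfying the constraints, not one prescribed by the notes:

```python
W, R = 64, 10                      # word size; table has m = 2**R slots
A = 0x9E3779B97F4A7C15             # arbitrary odd constant with 2**(W-1) < A < 2**W

def h(k):
    """Multiplication method: keep the top R bits of (A * k mod 2**W)."""
    return ((A * k) & ((1 << W) - 1)) >> (W - R)
```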
Universal Hashing
Let U be a universe of keys, and let H be a finite collection of hash functions mapping U to {0, 1, …, m−1}.
H is universal if ∀x, y ∈ U with x ≠ y:

|{h ∈ H : h(x) = h(y)}| = |H| / m

i.e., if h is chosen randomly from H, the probability of a collision between x and y is 1/m.
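One standard universal family is h(k) = ((a·k + b) mod p) mod m, with p a prime larger than every key and a, b drawn at random. A sketch (the prime and function name here are illustrative choices):

```python
import random

P = (1 << 31) - 1                  # a Mersenne prime; assumed larger than every key

def make_universal_hash(m):
    """Draw h(k) = ((a*k + b) mod P) mod m uniformly from a universal family."""
    a = random.randrange(1, P)     # a != 0
    b = random.randrange(0, P)
    return lambda k: ((a * k + b) % P) % m
```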
Perfect Hashing
Given n keys, construct a static hash table of size m=O(n) such that searching takes O(1) time in the worst case.
Idea: a two-level scheme with universal hashing at both levels and NO collisions at level 2.
If n_i items hash to level-1 slot i, then use m_i = n_i² slots in the level-2 table S_i.
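The two-level (FKS) scheme can be sketched as follows; all names are hypothetical, and a full implementation would also redraw the level-1 hash until Σ n_i² = O(n):

```python
import random

P = (1 << 31) - 1                        # prime assumed larger than every key

def rand_hash(m):
    """Random member of a universal family mapping keys into m slots."""
    a, b = random.randrange(1, P), random.randrange(0, P)
    return lambda k: ((a * k + b) % P) % m

def all_placed(bucket, h2, slots):
    """Try to place every key of bucket into slots without collision."""
    for k in bucket:
        j = h2(k)
        if slots[j] is not None:
            return False
        slots[j] = k
    return True

def build_perfect_table(keys):
    """Two-level scheme: n level-1 slots; slot i holding n_i keys gets a
    level-2 table of n_i**2 slots, re-drawn until it is collision-free."""
    n = len(keys)
    h1 = rand_hash(n)
    buckets = [[] for _ in range(n)]
    for k in keys:
        buckets[h1(k)].append(k)
    tables = []
    for bucket in buckets:
        m_i = len(bucket) ** 2
        while True:                      # expected O(1) redraws per bucket
            h2 = rand_hash(m_i) if m_i else None
            slots = [None] * m_i
            if all_placed(bucket, h2, slots):
                tables.append((h2, slots))
                break
    def lookup(k):
        h2, slots = tables[h1(k)]
        return bool(slots) and slots[h2(k)] == k
    return lookup
```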
Augmented Data Structures
Dynamic Order Statistics
Supports: Insert, Delete, Search(x), Select(i), Rank(x).
Idea: use an R-B tree, storing in each node the size of its subtree.
size[x]=size[left(x)]+size[right(x)]+1
Select(x, i): // initial call: Select(root, i)
    k = size[left(x)] + 1 // k = rank(x) within its subtree
    if i == k then return x
    if i < k then return Select(left(x), i)
    else return Select(right(x), i - k)
Running time = Θ(log n)
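The same Select can be sketched on a plain size-augmented BST (the notes use an R-B tree, whose balancing guarantees O(log n) height; rotations would also have to fix sizes):

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None
        self.size = 1                    # size of the subtree rooted here

def insert(root, key):
    """Unbalanced BST insert that maintains subtree sizes."""
    if root is None:
        return Node(key)
    root.size += 1
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def os_select(x, i):
    """Return the node holding the i-th smallest key in x's subtree."""
    k = (x.left.size if x.left else 0) + 1   # rank of x within its subtree
    if i == k:
        return x
    if i < k:
        return os_select(x.left, i)
    return os_select(x.right, i - k)
```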
Interval Tree
Supports: Insert, Delete, Interval-Search (find an interval in the set that overlaps a given query interval).
Idea: use an R-B tree, storing in each node the largest endpoint m in its subtree.
m[x] = max{high[int[x]], m[left(x)], m[right(x)]}

Interval-Search(i) // finds an interval that overlaps i
x = root
while x != nil and (low[i] > high[int[x]] or low[int[x]] > high[i]) do // i and int[x] don't overlap
if left[x] != nil and low[i] <= m[left[x]] then x = left[x]
else x = right[x]
return x
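The search above can be sketched on an unbalanced tree keyed by low endpoint (names hypothetical; the notes keep the tree balanced with R-B rotations, which must also fix m):

```python
class INode:
    def __init__(self, low, high):
        self.low, self.high = low, high      # the stored interval [low, high]
        self.left = self.right = None
        self.m = high                        # max high endpoint in this subtree

def interval_insert(root, low, high):
    """Unbalanced insert keyed on the low endpoint, updating m along the path."""
    if root is None:
        return INode(low, high)
    root.m = max(root.m, high)
    if low < root.low:
        root.left = interval_insert(root.left, low, high)
    else:
        root.right = interval_insert(root.right, low, high)
    return root

def interval_search(root, lo, hi):
    """Return some stored interval overlapping [lo, hi], or None."""
    x = root
    while x is not None and (lo > x.high or x.low > hi):  # no overlap with int[x]
        if x.left is not None and lo <= x.left.m:
            x = x.left     # if any overlap exists, one lies in the left subtree
        else:
            x = x.right
    return x
```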
Amortized Analysis
Potential Method
Framework:
- Start with a data structure D0.
- Operation i transforms Di−1 → Di.
- The cost of operation i is ci.
- Define a potential function Φ : {Di} → ℝ with Φ(D0) = 0 and Φ(Di) ≥ 0 for all i.
- The amortized cost ĉi with respect to Φ is ĉi = ci + Φ(Di) − Φ(Di−1).
- The total amortized cost of n operations is Σ ĉi = Σ ci + Φ(Dn) − Φ(D0) ≥ Σ ci.
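A classic instance is table doubling. The sketch below (hypothetical name) tabulates each append's amortized cost under the potential Φ = 2·num − size and shows it never exceeds 3:

```python
def amortized_append_costs(n):
    """Table doubling under Phi = 2*num - size: an append costs 1, plus num
    copies when the table doubles; every amortized cost
    c_i + Phi(D_i) - Phi(D_{i-1}) comes out <= 3."""
    size, num, phi = 0, 0, 0                 # Phi(D_0) = 0
    costs = []
    for _ in range(n):
        if num == size:                      # table full: double it, copy num items
            cost = num + 1
            size = max(1, 2 * size)
        else:
            cost = 1
        num += 1
        new_phi = 2 * num - size
        costs.append(cost + new_phi - phi)
        phi = new_phi
    return costs
```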
Competitive Analysis
An online algorithm A is α-competitive if ∃k such that for any sequence of operations S,
CostA(S) ≤ α⋅Copt(S) + k

where Copt(S) is the cost of the optimal, offline, “God’s” algorithm.
Karp-Rabin Algorithm: Find s in t
Rolling Hash ADT:
- r.append(c): r maintains a string x where r = h(x); append char c to the end of x.
- r.skip(c): delete the first char of x (assume it is c).

Then just use the ADT to “roll” over t to find s.
Note: if the hashes are equal, there is still a probability ≤ 1/|s| that the strings are not actually the same.
To implement the ADT, use a simple hash function h(k) = k mod m, where m is a random prime ≥ |s|.
We can treat x as a multi-digit number u in base a, where a is just the alphabet size.
So:
- r() = u mod m
- r stores u mod m and |x| (really a^|x| mod m), not u itself.
r.append(c)
    u = (u * a + ord(c)) mod m
      = ((u mod m) * a + ord(c)) mod m
      = (r() * a + ord(c)) mod m
r.skip(c) // assume char c is skipped
    u = (u − ord(c) * (pow(a, |x| − 1) mod m)) mod m
      = ((u mod m) − ord(c) * (pow(a, |x| − 1) mod m)) mod m
      = (r() − ord(c) * (pow(a, |x| − 1) mod m)) mod m
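The whole scheme can be sketched as below (names hypothetical; the prime is fixed here rather than random, and the modular inverse via `pow(a, -1, m)` needs Python 3.8+):

```python
class RollingHash:
    """r = h(x) for a string x treated as a base-a number u; stores u mod m
    and a**|x| mod m."""
    def __init__(self, a=256, m=1_000_000_007):
        self.a, self.m = a, m
        self.hash = 0                       # u mod m
        self.magic = 1                      # a**|x| mod m

    def append(self, c):
        self.hash = (self.hash * self.a + ord(c)) % self.m
        self.magic = (self.magic * self.a) % self.m

    def skip(self, c):
        # drop the leading digit: u' = u - ord(c) * a**(|x|-1)
        self.magic = self.magic * pow(self.a, -1, self.m) % self.m
        self.hash = (self.hash - ord(c) * self.magic) % self.m

def find(s, t):
    """Karp-Rabin: return the indices of every occurrence of s in t."""
    rs, rt = RollingHash(), RollingHash()
    for c in s:
        rs.append(c)
    hits = []
    for j, c in enumerate(t):
        rt.append(c)
        if j >= len(s):
            rt.skip(t[j - len(s)])          # keep the window at |s| chars
        if j >= len(s) - 1 and rt.hash == rs.hash:
            i = j - len(s) + 1
            if t[i:i + len(s)] == s:        # verify: equal hashes may collide
                hits.append(i)
    return hits
```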