What would cause an algorithm to have O(log log n) complexity?
This earlier question addresses some of the factors that might cause an algorithm to have O(log n) complexity.
What would cause an algorithm to have time complexity O(log log n)?
O(log log n) terms can show up in a variety of different places, but there are typically two main routes that arrive at this runtime.
Shrinking by a square root
As mentioned in the answer to the linked question, one common way for an algorithm to have time complexity O(log n) is for that algorithm to work by repeatedly cutting the size of the input down by some constant factor on each iteration. If this is the case, the algorithm must terminate after O(log n) iterations, because after doing O(log n) divisions by a constant, the algorithm must shrink the problem size down to 0 or 1. This is why, for example, binary search has complexity O(log n).
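The halving pattern behind that O(log n) bound can be sketched with a plain binary search:

```python
def binary_search(arr, target):
    """Each iteration halves the remaining search range, so at most
    O(log n) iterations run before lo and hi cross. Returns the index
    of target in the sorted list arr, or -1 if it is absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1     # discard the lower half
        else:
            hi = mid - 1     # discard the upper half
    return -1
```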
Interestingly, there is a similar way of shrinking down the size of a problem that yields runtimes of the form O(log log n). Instead of dividing the input in half at each layer, what happens if we take the square root of the size at each layer?
For example, let's take the number 65,536. How many times do we have to divide it by 2 until we get down to 1? If we do this, we get
- 65,536 / 2 = 32,768
- 32,768 / 2 = 16,384
- 16,384 / 2 = 8,192
- 8,192 / 2 = 4,096
- 4,096 / 2 = 2,048
- 2,048 / 2 = 1,024
- 1,024 / 2 = 512
- 512 / 2 = 256
- 256 / 2 = 128
- 128 / 2 = 64
- 64 / 2 = 32
- 32 / 2 = 16
- 16 / 2 = 8
- 8 / 2 = 4
- 4 / 2 = 2
- 2 / 2 = 1
This process takes 16 steps, and it's also the case that 65,536 = 2^16.
But, if we take the square root at each level, we get
- √65,536 = 256
- √256 = 16
- √16 = 4
- √4 = 2
Notice that this takes only 4 steps to get all the way down to 2. Why is this? Well, let's rewrite this sequence in terms of powers of two:
- √65,536 = √(2^16) = (2^16)^(1/2) = 2^8 = 256
- √256 = √(2^8) = (2^8)^(1/2) = 2^4 = 16
- √16 = √(2^4) = (2^4)^(1/2) = 2^2 = 4
- √4 = √(2^2) = (2^2)^(1/2) = 2^1 = 2
Notice that we followed the sequence 2^16 → 2^8 → 2^4 → 2^2 → 2^1. On each iteration, we cut the exponent of the power of two in half. That's interesting, because this connects back to what we already know - you can only divide a number k in half O(log k) times before it drops to zero.
So take any number n and write it as n = 2^k. Each time you take the square root of n, you halve the exponent in this equation. Therefore, only O(log k) square roots can be applied before k drops to 1 or lower (in which case n drops to 2 or lower). Since n = 2^k, we have k = log₂ n, and therefore the number of square roots taken is O(log k) = O(log log n). Therefore, if there is an algorithm that works by repeatedly reducing the problem to a subproblem whose size is the square root of the original problem size, that algorithm will terminate after O(log log n) steps.
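The contrast between the two shrinking rules can be checked directly; a small sketch counting steps under each rule:

```python
import math

def halving_steps(n):
    """Count how many times n can be halved before reaching 1 (about log2 n)."""
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def sqrt_steps(n):
    """Count how many integer square roots bring n down to 2 or lower
    (about log2 log2 n)."""
    steps = 0
    while n > 2:
        n = math.isqrt(n)
        steps += 1
    return steps
```

For n = 2^64, halving takes 64 steps while square-rooting takes only 6, matching log₂ 64 = 6.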
One real-world example of this is the van Emde Boas tree (vEB-tree) data structure. A vEB-tree is a specialized data structure for storing integers in the range 0 ... n - 1. It works as follows: the root node of the tree has √n pointers in it, splitting the range 0 ... n - 1 into √n buckets, each holding a range of roughly √n integers. These buckets are each internally subdivided into √(√n) buckets, each of which holds roughly √(√n) elements. To traverse the tree, you start at the root, determine which bucket you belong to, and then recursively continue in the appropriate subtree. Due to the way the vEB-tree is structured, you can determine in O(1) time which subtree to descend into, so after O(log log n) steps you will reach the bottom of the tree. Accordingly, lookups in a vEB-tree take time O(log log n).
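A minimal sketch of the part that drives this bound - not a full vEB-tree, just the key splitting: at each level a w-bit key is split into its high half (which bucket) and low half (position inside the bucket), so the bit width halves per level and the descent has O(log w) = O(log log U) levels:

```python
def veb_descend(key, bits):
    """Trace a vEB-style descent for a `bits`-bit key, returning the list of
    bucket indices chosen at each level. Each level halves the number of
    bits, so the path length is about log2(bits) + O(1)."""
    path = []
    while bits > 1:
        half = bits // 2
        high = key >> half             # which of the √U buckets to enter (O(1))
        low = key & ((1 << half) - 1)  # position within that bucket
        path.append(high)
        key, bits = low, half          # recurse into a universe of √U keys
    return path
```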
Another example is the Hopcroft-Fortune closest pair of points algorithm. This algorithm attempts to find the two closest points in a collection of 2D points. It works by creating a grid of buckets and distributing the points into those buckets. If at any point in the algorithm a bucket is found that has more than √n points in it, the algorithm recursively processes that bucket. The maximum depth of the recursion is therefore O(log log n), and using an analysis of the recursion tree it can be shown that each layer in the tree does O(n) work. Therefore, the total runtime of the algorithm is O(n log log n).
O(log n) algorithms on small inputs
There are other algorithms that achieve O(log log n) runtimes by using algorithms like binary search on objects of size O(log n). For example, the x-fast trie data structure performs a binary search over the layers of a tree of height O(log U), so the runtime of some of its operations is O(log log U). The related y-fast trie gets some of its O(log log U) runtimes by maintaining balanced BSTs of O(log U) nodes each, allowing searches in those trees to run in time O(log log U). The tango tree and related multisplay tree data structures end up with an O(log log n) term in their analyses because they maintain trees that contain O(log n) items each.
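The x-fast trie's trick can be sketched in isolation: binary-search over O(log U) levels costs O(log log U) probes. Here `levels_with_prefix` is a hypothetical stand-in for the trie's per-level hash tables (in a real x-fast trie, membership of a prefix is an O(1) hash lookup, and matched levels always form a downward-closed range starting at the root):

```python
def longest_matching_level(levels_with_prefix, num_levels):
    """Binary search for the deepest tree level whose prefix table contains
    the query's prefix. levels_with_prefix is a set of level indices (a
    stand-in for O(1) hash-table probes). With num_levels = O(log U)
    levels, this loop runs O(log log U) times. Level 0 (root) always matches."""
    lo, hi = 0, num_levels
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid in levels_with_prefix:  # O(1) probe at this level
            lo = mid                   # prefix exists this deep; go deeper
        else:
            hi = mid - 1
    return lo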
Other examples
Other algorithms achieve runtime O(log log n) in other ways. Interpolation search has expected runtime O(log log n) when finding a number in a sorted array of uniformly distributed values, though the analysis is fairly involved. Ultimately, the analysis works by showing that the number of iterations performed is equal to the number k such that n^(2^-k) ≤ 2, for which log log n is the correct solution. Some algorithms, like the Cheriton-Tarjan MST algorithm, arrive at a runtime involving O(log log n) by solving a complex constrained optimization problem.
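A sketch of interpolation search itself (the O(log log n) expected bound holds for uniformly distributed data; on adversarial data it degrades to O(n) probes):

```python
def interpolation_search(arr, target):
    """Search a sorted list by probing where the target 'should' be if the
    values were evenly spread, rather than always probing the middle.
    Returns the index of target, or -1 if it is absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= target <= arr[hi]:
        if arr[lo] == arr[hi]:  # all remaining values equal; avoid div by zero
            break
        # linear interpolation estimate of target's position
        pos = lo + (target - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == target:
            return pos
        elif arr[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return lo if lo <= hi and arr[lo] == target else -1
```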
Hope this helps!