Introduction to Complexity Theory/Big O Algorithm Analysis

Big O notation, whilst not being a part of complexity theory itself, is used to describe an upper bound on the time or space usage of an algorithm. In this notation, n refers to the size of the input to the algorithm. When it is written that a given algorithm runs “in big O of” a mathematical expression, it means that the time (or space) the algorithm needs in order to finish executing grows no faster than that expression, taken as a function of the input size.

Understanding big O, and how to describe an algorithm in terms of it, is necessary for understanding the analysis of the relative complexity of any problem.

Big O notation

The analysis of algorithms is a major task in computer science. In order to compare algorithms we must have some criteria for measuring the efficiency of an algorithm. If an algorithm is described as being O(n), the time it takes to run grows linearly in proportion to the size of the input. On a deterministic Turing machine, one such algorithm is the linear search algorithm: it accepts as input a value and a list of data, and looks through that list sequentially until it finds the specified value, at which point it returns the key and halts; if it does not find the value, it returns a value which could not be a key in order to signify that the value is not in the list of data.
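
A minimal sketch of such a linear search in Python (the function name and the use of -1 as the "could not be a key" signal are illustrative choices, not fixed by the description above):

    def linear_search(data, value):
        """Scan the list from the front; return the key (index) of the first
        match, or -1 to signal that the value is not in the list."""
        for index, element in enumerate(data):
            if element == value:
                return index
        return -1  # -1 can never be a valid index, so it signals "not found"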

An algorithm that runs in O(log n) has its running time grow logarithmically with the size of the input. An example of an algorithm that runs in O(log n) time is the binary search algorithm.

O(n log n) time on a deterministic Turing machine is the lower bound of all comparison-based sorting algorithms, such as merge sort.

How to Determine Big O

In order to determine the big O of an algorithm, we look at how the size of the input affects the time that it takes the algorithm to execute. A simple brute-force way to do this is to plot a graph of input size vs. execution time for a given algorithm and fit an equation to the data. Another way to accomplish this is to think through the algorithm's execution logically.
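
One brute-force illustration in Python, using the standard-library timeit module and re-stating the linear_search sketch from above so the snippet is self-contained (the input sizes and repetition count are arbitrary choices):

    import timeit

    def linear_search(data, value):
        for index, element in enumerate(data):
            if element == value:
                return index
        return -1

    # Searching for a value that is not present forces the worst case (n comparisons).
    # If the measured times grow roughly in proportion to n, that is consistent
    # with the algorithm being O(n).
    for n in (1_000, 10_000, 100_000):
        data = list(range(n))
        seconds = timeit.timeit(lambda: linear_search(data, -1), number=100)
        print(f"n = {n:>7}: {seconds:.4f} s for 100 worst-case searches")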

When the expression for execution time in terms of input size is a polynomial, the largest term in the polynomial is all that is used, and constant factors in that term are dropped. For example, a running time of 3n² + 5n + 2 would simply be written as O(n²), because the two functions grow at the same asymptotic rate.

Similarly, the base of a logarithm does not matter when discussing big O. The expression log₂(n) could, through the change of base formula, be rewritten as log₁₀(n) / log₁₀(2). The constant factor 1 / log₁₀(2) could then be dropped. Because of this, all logarithmic bases are equivalent in big O notation and the base is omitted in practice, giving simply O(log n).
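
Written out as a worked equation (a LaTeX sketch; base 2 and base 10 are used purely as illustrative bases):

    \log_2(n) = \frac{\log_{10}(n)}{\log_{10}(2)}
              = \frac{1}{\log_{10}(2)} \cdot \log_{10}(n),
    \qquad\text{so}\qquad
    O(\log_2 n) = O(\log_{10} n) = O(\log n).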

Analysis of Linear Search

The way in which big O is determined for a particular algorithm is called algorithm analysis. Take for example the simple linear search algorithm that we mentioned previously and apply it to the value 3 and the ordered set S = {0, 1, 2, 3, 4, 5}.

Sequentially the algorithm would start from the first element of the set, 0, and then look at each next value until it finds the value 3, thus making four comparisons.

But what about all of the infinitely many other integers that we could have looked for? Let's try the value 7. The algorithm will check 7 against 0, 1, 2, 3, 4, and 5 until it halts without finding the value, thus making six comparisons.

From this we can conclude that, in the worst case, the algorithm will make a number of comparisons equal to the input size, n, and thus is O(n).
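
A small sketch that instruments the linear search above so that it also reports its comparison count (the counting wrapper is an illustrative addition, not part of the algorithm itself):

    def linear_search_counting(data, value):
        """Linear search that also reports how many comparisons it made."""
        comparisons = 0
        for index, element in enumerate(data):
            comparisons += 1
            if element == value:
                return index, comparisons
        return -1, comparisons

    print(linear_search_counting([0, 1, 2, 3, 4, 5], 3))  # (3, 4)  -> four comparisons
    print(linear_search_counting([0, 1, 2, 3, 4, 5], 7))  # (-1, 6) -> six comparisons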

Analysis of Binary Search

The binary search algorithm is more efficient than the previously mentioned linear search algorithm, but harder to analyse.

Let's take the same set as we did last time, S = {0, 1, 2, 3, 4, 5}. This time we will search for the value 1. The binary search algorithm divides the set into two equally-sized subsets (or almost equally-sized if the set has an odd number of elements) and, by comparing against the value just to the left of the centre (even-sized set) or the centre value (odd-sized set), determines which subset to look further into. So, in our case, the binary search algorithm divides the set into the subsets S₁ = {0, 1, 2} and S₂ = {3, 4, 5}. The centre of set S lies between the end of S₁ and the start of S₂, so we compare our value to the end of S₁, which is 2. Since 1 ≤ 2, if 1 exists in our ordered set S it must be contained in S₁. We then compare 1 to the centre of S₁, find that it is equal to it, and conclude that 1 exists in S.
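
A minimal iterative sketch of binary search in Python (returning -1 for an absent value is an illustrative convention; because this standard formulation excludes the compared element from both halves, its exact comparison count can differ by one from the hand trace above):

    def binary_search(data, value):
        """Search a sorted list; return the index of value, or -1 if absent."""
        lo, hi = 0, len(data) - 1
        while lo <= hi:
            # For an even-sized range this picks the element just left of the
            # centre; for an odd-sized range it picks the centre element.
            mid = (lo + hi) // 2
            if data[mid] == value:
                return mid
            elif value < data[mid]:
                hi = mid - 1   # keep searching the left half
            else:
                lo = mid + 1   # keep searching the right half
        return -1

    print(binary_search([0, 1, 2, 3, 4, 5], 1))  # 1
    print(binary_search([0, 1, 2, 3, 4, 5], 7))  # -1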

That was only two comparisons, but what if we again take S and search for a value which does not exist in it, say 7? We would compare 7 to 2, then to 4 (the centre of S₂), and finally to 5, without successfully finding it. This would give us a total of three comparisons. But what does this all mean? Let's look at the maximum number of comparisons needed for different input sizes:

Input Size to Comparisons

       n    Maximum comparisons needed
       1    1
       7    3
       8    3
       9    4
      15    4
      16    4
      17    5
    1023    10
    1024    10
    1025    11

The pattern shown here is that the greatest number of comparisons for an input of size n is ⌈log₂(n)⌉, the binary logarithm of the input size rounded up to the nearest integer (with a single comparison needed when n = 1). Generalised for very large input sizes, we get that the binary search algorithm is O(log n).
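
A quick sketch that reproduces the table above from that formula, using Python's standard math module (the max(1, ...) term handles the single-element case):

    import math

    # Predicted maximum comparisons for the input sizes listed in the table.
    for n in (1, 7, 8, 9, 15, 16, 17, 1023, 1024, 1025):
        predicted = max(1, math.ceil(math.log2(n)))
        print(f"n = {n:>4}: at most {predicted} comparisons")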

o, Ω, ω, and Θ

Activities

  • Take the quiz.
  • Write a paper detailing your analysis of the big O complexity of the bubble sort algorithm.