# Big O Notation Primer

May 21, 2015

If you’ve ever competed in any of our data science challenges you’ve probably seen a discussion or tutorial that looks like:

Let

f, g be positive non-decreasing functions defined on the positive integers. We say that f(N) is O(g(N)) (read: f is big-oh of g) if for some c and N_0 the following condition holds…

These symbols may seem strange to those who haven't competed in algorithm challenges, so what do they mean? Perhaps a recipe for disaster!? Fear not: they are not a harbinger of death but a notation we use to articulate how long an algorithm takes to execute. This lets us compare the efficiency of different approaches to the same problem.

As defined by Wikipedia, Big O notation describes the limiting behavior of a function when the argument tends towards a particular value or infinity, usually in terms of simpler functions. Here are some of the more common complexity classes you'll run into.

**O(1)**

O(1) represents an algorithm that takes the same amount of time to execute regardless of the number of inputs. So, 1 item takes 1 second, 10 items take 1 second, 100 items take 1 second and so on. Therefore performance is not affected by the size of the input data.
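Indexing into a Python list is a simple illustration: the lookup takes the same time whether the list holds three elements or three million. A minimal sketch:

```python
def get_first(items):
    # Indexing a list is a single operation, regardless of len(items),
    # so this function runs in constant O(1) time.
    return items[0]

print(get_first([7, 8, 9]))  # 7
```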

**O(N)**

O(N) represents an algorithm where the size of the input data impacts the execution time. The performance of the algorithm is directly proportional to the number of inputs. So, 1 item takes 1 second, 10 items take 10 seconds, 100 items take 100 seconds and so on.
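A linear search is the textbook O(N) example: in the worst case, every element must be inspected once. A sketch:

```python
def contains(items, target):
    # Worst case we look at every element exactly once,
    # so the running time grows linearly with len(items).
    for item in items:
        if item == target:
            return True
    return False

print(contains([1, 2, 3], 2))  # True
```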

**O(log N)**

O(log N) represents an algorithm where the number of computations grows linearly as input data grows exponentially. So 1 item takes 1 second, 10 items take 2 seconds, 100 items take 3 seconds and so on.
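Binary search over a sorted list is the classic O(log N) algorithm: each comparison halves the remaining search range. A sketch:

```python
def binary_search(sorted_items, target):
    # Each iteration halves the search range, so the number of
    # comparisons grows with log2(len(sorted_items)).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # not found

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```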

**O(2^N)**

O(2^N) represents an algorithm where execution time doubles for each additional input. So 1 item takes 2 seconds, 2 items take 4 seconds, 3 items take 8 seconds and so on.
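Naively computing Fibonacci numbers with recursion is a classic example of exponential growth: each call spawns two more calls, so the call tree roughly doubles in size as N grows. A sketch:

```python
def fib(n):
    # Each call branches into two more calls, so the total number
    # of calls grows on the order of 2^N for the naive version.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```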

**O(N!)**

O(N!) represents a factorial algorithm that must perform N! calculations. So 1 item takes 1 second, 2 items take 2 seconds, 3 items take 6 seconds and so on. An example of this is an algorithm that generates every permutation of a list, such as a brute-force solution to the traveling salesman problem.
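Generating all permutations makes the factorial growth visible: there are N choices for the first slot, N−1 for the second, and so on, yielding N! results in total. A sketch:

```python
def permutations(items):
    # N choices for the first position, N-1 for the next, ...
    # so the result list contains N! permutations.
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, first in enumerate(items):
        rest = items[:i] + items[i + 1:]
        for perm in permutations(rest):
            result.append([first] + perm)
    return result

print(len(permutations([1, 2, 3])))  # 6, i.e. 3!
```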

**O(N^2)**

O(N^2) represents an algorithm whose execution time is directly proportional to the square of the input size: it performs on the order of N^2 calculations. Bubble sort is a good example of such an algorithm.
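Bubble sort's two nested passes over the data make the quadratic cost concrete: roughly N comparisons on each of N passes. A sketch:

```python
def bubble_sort(items):
    # Two nested loops over the data: on the order of N * N
    # comparisons, i.e. O(N^2) time.
    items = list(items)  # sort a copy, leave the input untouched
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([3, 1, 2]))  # [1, 2, 3]
```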

**O(N log N)**

O(N log N) represents an algorithm whose execution time grows in proportion to the input size times the logarithm of the input size. Merge sort and quicksort are good examples of this class.
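Merge sort shows where the N log N comes from: the input is halved log N times, and each level of halving does O(N) work to merge the pieces back together. A sketch:

```python
def merge_sort(items):
    # log N levels of splitting, with O(N) merging work per level:
    # O(N log N) overall.
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 1]))  # [1, 2, 4, 5]
```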

The Big-O Cheat Sheet has a great graph of some of these.

For more info, check out Big O notation on Wikipedia and this answer on Stack Overflow.