Big O Notation and Time Complexity

David Chung
Published in Level Up Coding
Jul 9, 2020 · 5 min read


Big O notation is a simplified analysis of an algorithm’s efficiency. It describes an algorithm’s complexity in terms of its input size, N, and gives us a way to abstract the efficiency of our code from the machines it runs on. We don’t care how powerful our machine is, but rather about the basic steps of the code. We can use Big O to analyze both time and space. I will go over how we can use Big O to measure time complexity, using Ruby for examples.

Types of measurement

There are a few ways to look at an algorithm’s efficiency: worst-case, best-case, and average-case. When we use big O notation, we typically look at the worst case. This isn’t to say the other cases aren’t important.

General rules

  1. Ignore constants

5n -> O(n)

Big O notation ignores constants. For example, if a function has a running time of 5n, we say it runs on the order of N, or O(n). This is because as N gets large, the factor of 5 no longer matters.

2. As N grows, certain terms “dominate” others

Here’s a list:

O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2^n) < O(n!)

We ignore lower-order terms when they are dominated by higher-order terms.

Constant Time: O(1)

This basic statement computes x and does not depend on the input size, N, in any way. We say this is a “Big O of one”, or constant time.
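A minimal Ruby sketch of such a statement (the method name and values here are illustrative):

```ruby
# One assignment: a single operation that runs once
# regardless of input size, so this is O(1).
def compute_x(n)
  x = n + 5 # executes exactly once, no matter how large n is
  x
end
```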

total time = O(1) + O(1) + O(1) = O(1)

What happens when we have a sequence of statements? Notice that all of them are constant time. How do we compute big O for this block of code? We simply add their times together and get 3 * O(1). But remember that we drop constants, so this is still big O of one.
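A sketch of such a sequence in Ruby (the statements themselves are illustrative):

```ruby
# Three constant-time statements in sequence:
# O(1) + O(1) + O(1) = 3 * O(1), which simplifies to O(1)
# once we drop the constant.
def constant_sequence(n)
  a = n + 1 # O(1)
  b = n * 2 # O(1)
  c = a + b # O(1)
  c
end
```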

Linear Time: O(n)

The run time of this method is linear, or O(n). We have a loop inside the method which iterates over the array and outputs each element. The number of operations this loop performs changes with the size of the array: an array of size 6 takes only 6 iterations, while an array of 18 elements takes 3 times as long. As the input size increases, so does the runtime.
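The method described above might look something like this sketch in Ruby:

```ruby
# The loop body runs once per element, so for an array of
# size n this performs n operations: O(n).
def print_elements(array)
  array.each do |element|
    puts element # executed array.length times
  end
end
```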

Quadratic Time: O(n²)

When we have nested loops, our runtime is usually quadratic, or O(n²). The outer loop makes its first iteration, then the inner loop iterates in its entirety before the outer loop moves on to its second iteration. When you are working with large arrays, this becomes very inefficient because the number of iterations grows with the square of the input size: an array of size 10 takes 100 iterations.
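A sketch of such nested loops in Ruby (building every pair of elements, as an illustration):

```ruby
# For each of the n outer iterations, the inner loop runs
# n more times, so the body executes n * n = n^2 times: O(n^2).
def all_pairs(array)
  pairs = []
  array.each do |a|   # outer loop: n iterations
    array.each do |b| # inner loop: n iterations per outer iteration
      pairs << [a, b]
    end
  end
  pairs
end
```

Calling this with an array of 10 elements produces 100 pairs, matching the iteration count above.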

Logarithmic Time: O(log n)

Logarithmic run time is a very efficient big O notation. Rather than explaining how a logarithmic function works (because I’m terrible at math), giving an example might be easier. A classic example of logarithmic time is binary search. Say we have a sorted array of numbers:

If we were told to search for the number 14, we could just iterate over the entire array until we hit 14 (linear time). Binary search makes this easier: we start at the middle of the collection and compare that value to the one we are looking for. Let’s say the middle value is 9. We consider whether 14 is to the right or left of 9; because 14 is higher than 9, we can ignore everything to the left of it.

All we need to do is continue the same exact pattern until we reach our goal.

Depending on where the value sits, we could end the operation even earlier! Instead of taking up to 14 iterations, we only needed 3–4 comparisons.
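The pattern above can be sketched as an iterative binary search in Ruby (the sample array used in the usage note is illustrative, assuming it is sorted):

```ruby
# Each comparison halves the remaining search range, so a sorted
# array of size n needs at most about log2(n) + 1 steps: O(log n).
def binary_search(sorted, target)
  low  = 0
  high = sorted.length - 1
  while low <= high
    mid = (low + high) / 2
    case sorted[mid] <=> target
    when 0  then return mid     # found it: return the index
    when -1 then low  = mid + 1 # target is in the right half
    else         high = mid - 1 # target is in the left half
    end
  end
  nil # target is not in the array
end
```

For example, `binary_search([1, 3, 5, 7, 9, 11, 14, 17], 14)` finds the target in 3 comparisons rather than scanning all 8 elements.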

Conclusion

Knowing why we use big O notation is important when working with algorithms. We can develop efficient algorithms, which can lead to programs that use less memory (space complexity) and run faster. I did not go over anything above quadratic time, but know that those run times are extremely slow. In practice, be mindful of how long your algorithms take. Depending on the application you are working with, scalable and clean code will often lead to better programs. Always be on the lookout for the best-case and worst-case run times!
