The Big Ideas Behind Big O Notation

When approaching a programming challenge, there will always be multiple ways to implement a solution. Sometimes the differences are trivial stylistic choices. Other times, they are the reason one solution works at scale while another cracks under pressure. To determine the most effective way to solve a problem, programmers analyze the complexity of the algorithms involved.

Big O notation provides a shared language for describing the complexity of algorithms. It acts as a shorthand summary that communicates, in broad strokes, the general degree of complexity of a process. This article will attempt to shed light on the core ideas behind Big O as well as a few of the most commonly referenced examples.


One function may run very quickly but consume a large amount of computing resources, while another may use minimal memory yet require so many iterations that it takes far more time. Big O can describe either kind of cost.

Worst Case Scenarios

array = [1,0,0,0,0,0,0,0,0]

Imagine you are writing a function that tries to find the position of the 1 in an array like the one above. One very simple solution would be to start at the beginning of the array and step through each item until you arrived at the desired 1.

def indexOfOne(array)
  array.each.with_index do |item, index|
    return index if item == 1
  end
end

In the example array, this would be extremely fast because the first step would yield a match. Imagine if the same strategy (algorithm) were applied to the following array.

array = [0,0,0,0,0,0,0,0,1]

Here, the same approach would take 9 steps because it wouldn’t encounter the 1 until the very end of the array. So is this a slow algorithm or a fast one?

In this case, only the second example matters. Big O notation only concerns the worst possible outcome of a process. What is the longest it could take? What is the maximum amount of memory or disk space it could require?
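One way to make the worst case concrete is to count steps directly. The helper below is a hypothetical instrumented version of the search above (the name `steps_to_find_one` is illustrative, not from the original example):

```ruby
# Count how many elements a linear scan must examine before finding the 1.
def steps_to_find_one(array)
  array.each.with_index do |item, index|
    return index + 1 if item == 1
  end
  array.length # scanned everything without a match
end

best  = steps_to_find_one([1, 0, 0, 0, 0, 0, 0, 0, 0]) # the 1 is first: 1 step
worst = steps_to_find_one([0, 0, 0, 0, 0, 0, 0, 0, 1]) # the 1 is last: 9 steps
```

Big O characterizes the algorithm by that second number, the worst case.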


The main factor here is inputs. The input in the first example was a short array of size nine. Big O describes how fast an algorithm’s running time or resource consumption will increase as its inputs increase.

Consider the following two functions:

Function A

def itemIncreaser(array)
  array.map { |item| item + 1 }
end

Function B

def itemConverter(array)
  array.map do |item|
    item += 1
    item += 2
    item += 3
    item
  end
end

As far as Big O notation is concerned, both functions are equally complex. How is this possible when they look so different? Remember, Big O is about inputs, so consider what happens at different scales with each function.

With an input array of 10 items, Function A would need to take one step per item, simply incrementing each by 1 and returning the result. Thus we could say that it will take 10 “steps” to work through 10 items. Function B is longer, and will require 3 steps per item, thus it would take 30 total steps for the same 10 item input.

Now, consider what happens when the same functions are given an input array of length 1,000. Unsurprisingly, Function A takes 1,000 steps, while Function B takes 3,000. The key point is that although Function B requires 3 times as many steps as Function A, the rate of growth is the same for both. With any given inputs, at any scale, Function B will always take exactly 3 times as many steps as Function A. Because Big O ignores constant multipliers like this, both functions are described as O(N).
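The constant 3x relationship can be observed directly by tallying steps. These counted variants are hypothetical instrumentation, not part of the original functions; the counter hash and names are assumptions for illustration:

```ruby
# Instrumented versions of Function A and Function B that tally their "steps".
def itemIncreaserCounted(array, counter)
  array.map do |item|
    counter[:steps] += 1
    item + 1
  end
end

def itemConverterCounted(array, counter)
  array.map do |item|
    item += 1; counter[:steps] += 1
    item += 2; counter[:steps] += 1
    item += 3; counter[:steps] += 1
    item
  end
end

a = { steps: 0 }
b = { steps: 0 }
itemIncreaserCounted(Array.new(1_000, 0), a)
itemConverterCounted(Array.new(1_000, 0), b)
# a[:steps] is 1,000 and b[:steps] is 3,000 -- a constant 3x at any scale
```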


Note that runtime is not exact — this isn’t designed to say that these functions will take precisely 200 milliseconds per input item or utilize 120KB of disk space. Instead, O(N) means that the runtime of the function will increase at a rate that is directly proportional to the number of inputs (N). An algorithm with an O(N) runtime will take steadily more time given more inputs.

While there are numerous different types of runtimes, knowing a handful of the most common will enable an engineer to quickly convey the general complexity (and ability to scale) of an algorithm.

“This function runs quickly in the test environment, but how well will it handle the full dataset in production?”

“It will scale predictably; it has an O(N) runtime.”

Example Runtimes

O(1)

def myFunction(array)
  doSomethingWith(array[0])
end
Regardless of what happens inside the function named doSomethingWith, this function will always take the same amount of time no matter how large the input array is because it only uses the first element of the array. When execution time is independent of inputs like this, the algorithm is said to have an O(1) runtime.
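A concrete stand-in makes this easy to verify. The function below (a hypothetical example, with the name `firstSquared` chosen for illustration) does the same fixed amount of work whether the array has two elements or a million:

```ruby
# A hypothetical O(1) function: it touches only the first element,
# so the cost does not change as the array grows.
def firstSquared(array)
  array[0] * array[0]
end

firstSquared([3, 5])                  # one step
firstSquared(Array.new(1_000_000, 3)) # still one step
```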

O(N²)

def multiplier(array)
  results = []
  array.each do |x|
    array.each do |y|
      results.push(x * y)
    end
  end
  results
end

One quick indication of this type of runtime is nested loops. Given an array of two elements, the function generates four results: an input of [3,5] yields a results array of [9,15,15,25]. An input of three elements yields nine results; an input of 100 yields 10,000. Note how dramatically the amount of work and space required rises as inputs grow.

This is a common pattern that can create significant problems at scale. When algorithms rely on nested loops, there is a great likelihood that they will fall into this category. Similarly, if the function above added another nested loop inside of the existing ones, it would be said to have a runtime of O(N³).
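The quadratic growth can be checked by measuring the output size. The helper below has the same shape as the multiplier function above (the name `pairwiseProducts` is a hypothetical stand-in):

```ruby
# Nested loops: the number of results is the square of the input length.
def pairwiseProducts(array)
  results = []
  array.each do |x|
    array.each do |y|
      results.push(x * y)
    end
  end
  results
end

pairwiseProducts([3, 5])                # => [9, 15, 15, 25]
pairwiseProducts((1..100).to_a).length  # => 10000
```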

O(2ᴺ)

def doRecursion(input)
  return input if input <= 1
  doRecursion(input - 2) + doRecursion(input - 1)
end

This function calls itself internally and is thus recursive. Each call triggers two more calls to the function, and each of those does the same. If the recursive function called itself three times instead, it would be marked as O(3ᴺ).

This is one of the most complex runtimes, as a moderate increase in input size can result in a massive increase in resulting execution time or memory space.
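The explosion is easy to measure by counting calls. The version below is a hypothetical instrumented variant of doRecursion (the counter hash and the name `countedRecursion` are assumptions for illustration):

```ruby
# Count the total number of calls a doRecursion-style function makes.
def countedRecursion(input, counter = { calls: 0 })
  counter[:calls] += 1
  unless input <= 1
    countedRecursion(input - 2, counter)
    countedRecursion(input - 1, counter)
  end
  counter[:calls]
end

countedRecursion(10) # => 177 calls
countedRecursion(20) # => 21891 calls -- doubling the input multiplied the work ~124x
```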

O(log N)

A binary search is a method of finding an element in a sorted array. At each step it divides the remaining portion of the array in half at a pivot point and determines whether the desired element falls before or after the pivot. The following steps illustrate how a binary search would be used to find the 5 in an array of fifteen numbers.


The pivot point starts at the midpoint of the full array. Because 5 is less than 12, it must fall to the left of the pivot given that the array is sorted. Thus, everything to the right of the 12 is discarded and a new pivot point is placed at the center of the remaining portion of the array.

[0,1,3,3,4,5,8,12                     ]

Because 5 is greater than 4, the 4 and all elements to the left are discarded and the pivot moves again to the new center of the array.

[          5,8,12                     ]

With one final repetition of this process the pivot will end up on the desired number, 5.

[          5                          ]

This binary search is an example of an O(log N) runtime. The input size can increase greatly without increasing the number of steps needed to find a result because the inputs are halved at each step. Doubling the length of the input to 30 elements would only add a single extra step in the previous example, because the first step would immediately cut the input array back in half to 15 elements.
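The steps above can be sketched as a minimal iterative implementation, assuming a sorted input array (the name `binarySearch` and the nil-on-miss behavior are choices for this sketch, not from the article):

```ruby
# Iterative binary search over a sorted array; returns the index of
# target, or nil if it is absent. Each pass discards half the range.
def binarySearch(sorted, target)
  low = 0
  high = sorted.length - 1
  while low <= high
    pivot = (low + high) / 2
    case sorted[pivot] <=> target
    when 0  then return pivot    # the pivot landed on the target
    when -1 then low = pivot + 1 # target is to the right; drop the left half
    else high = pivot - 1        # target is to the left; drop the right half
    end
  end
  nil
end

binarySearch([0, 1, 3, 3, 4, 5, 8, 12], 5) # => 5 (the index of the value 5)
```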

Because of this relationship between input size and time/space requirements, logarithmic algorithms can be effective at scale where an O(N) or O(N²) algorithm would begin to become prohibitively expensive.

Further Resources

A beginner’s guide to Big O notation — A brief but clear summary of common runtimes, and one of the resources that I consulted while writing this article.

Big-O Algorithm Complexity Cheat Sheet — I highly recommend taking a look at this page to see its visual comparing the complexities of various runtimes.

Big O notation: definition and examples — Somewhat math-heavy but rich dive into Big O, complete with handy visuals.



Edward Saavedra

Full stack software developer by day. Writer, illustrator, and general creative dabbler by night.