The Big O

Mai Pham
Published in Nerd For Tech
4 min read · Jun 27, 2021


Beginner’s Guide to Understanding Big-O Notation

As a beginner in software development, you have an array of concepts to study and understand. One topic, however, trumps them all, and it can be quite a head-scratcher when you are just starting out on your coding journey. That topic is, you guessed it, Big-O notation! (written with the letter O, not a zero). In this blog, we will demystify Big-O and hopefully help you understand it a little better. So what is Big-O notation, and why is it important to developers when analyzing algorithms?

Big-O, also known as Landau’s symbol, is a “symbolism used in complexity theory, computer science, and mathematics to describe the asymptotic behavior of functions. Basically, it tells you how fast a function grows or declines”, according to MIT. “The letter O is used because the rate of growth of a function is also called its order.” The notation is named after Edmund Landau, a German number theorist.

Simply put, Big O is used to describe the performance of an algorithm. Imagine we have multiple valid ways of doing something (in this case, different ways to solve a problem or write a function). How do we determine which one is best? That’s where Big O comes in: it gives us a way to compare the performance of one piece of code against another.

Time / Space Complexity of Big O

Let’s discuss the different notations of Big O.

O(1) — Denotes constant complexity.
It simply means that the operation will always execute in a constant amount of time regardless of how big the input size is.

Examples of constant algorithms:

  • Check if a number is even or odd
  • Print the first element from list
  • Remove an item from an object
O(1) complexity
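To make that concrete, here is a minimal sketch in TypeScript (the function name isEven is just an illustrative choice): checking whether a number is even takes the same single step whether the input is 4 or 4 million.

function isEven(num: number): boolean {
  // One modulo check and one comparison, no matter how large num is.
  return num % 2 === 0;
}

console.log(isEven(4));        // true
console.log(isEven(1000001));  // false, and it took no longer than the call above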

O(n) — Denotes linear time complexity
This simply means that the program visits every element of the input. The number of operations (steps) the algorithm performs scales in direct proportion to the input size: as the amount of data grows, the runtime grows linearly with it.

Examples of linear algorithms:

  • Find a given element in a collection
  • Get the max/min value
  • Return all the values in a list
  • Sum up to the number given
O(n) complexity: summing up to a given number, written with a loop
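A rough TypeScript sketch of that sum (the name sumUpTo is just illustrative): the loop body runs once for each number from 1 to n, so the work grows in direct proportion to n.

function sumUpTo(n: number): number {
  let total = 0;
  // The loop runs n times, so the runtime grows linearly with n.
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

console.log(sumUpTo(5));    // 15
console.log(sumUpTo(100));  // 5050, and it takes roughly 20 times as many steps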

O(n²) — Denotes quadratic time complexity
This means that the number of operations it performs scales in proportion to the square of the input size. An excellent example of this is checking whether there are any duplicates in a list of items. Quadratic time is common with algorithms that involve nested loops or iterations.

Examples of quadratic algorithms:

  • Sorting algorithms (Bubble, Selection, Insertion)
  • Check for duplicated values
  • Print all ordered pairs in an array
O(n²) complexity of a nested loop algorithm
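Here is one way such a nested loop can look in TypeScript (printAllPairs is just an illustrative name), printing all ordered pairs in an array: the inner loop does n steps for each of the n outer steps, giving n * n operations.

function printAllPairs(arr: number[]): void {
  for (let i = 0; i < arr.length; i++) {     // outer loop: O(n)
    for (let j = 0; j < arr.length; j++) {   // inner loop: O(n) inside O(n)
      console.log(arr[i], arr[j]);           // runs n * n = n² times in total
    }
  }
}

printAllPairs([1, 2, 3]); // prints 9 pairs; an array of 10 items would print 100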

O(n) operation inside of an O(n) operation → O(n * n) → O(n²). Simply put, as n grows, the runtime grows at the rate of n squared.

We want to avoid letting the time complexity of an algorithm reach O(n²) where possible, because the runtime grows with the square of the input size: double the input and you quadruple the work. This makes the runtime very slow when working with a big data set.

There are a few other time complexities, such as cubic, factorial, polynomial, and logarithmic, but we won’t cover those in this beginner’s blog post. For now, this is all you need to know to have a basic understanding of Big O.

TL;DR —
Big O is just a way of talking about the runtime of an algorithm: how long it takes for the code to run. It’s how we compare the efficiency of different solutions to a problem. With Big O, we express the runtime in terms of how quickly it grows relative to the input as the input (data set) gets larger. The terms we use to differentiate the runtimes are O(1), or constant; O(n), or linear; and O(n²), or quadratic. O(1) is the most efficient, O(n) is the most common, and O(n²) is the situation you want to avoid if possible.

So some of you may ask, who cares about the performance of the code if it works and it’s valid? Who cares if it’s best? Well, it matters because…

  • The size of the input data set. On a large data set, one algorithm can save an hour every time it runs compared to another; that is where it matters.
  • Even if you are content with your solution, it’s important to understand how it compares to another solution.
  • It’s also useful in discussing trade-offs between different approaches. Sometimes the question isn’t simply which solution is best: one solution may shine on a huge data set, while another runs in a consistent amount of time even if it is a bit slower overall.
  • INTERVIEWS — you’ll see it in interviews.

I hope this helped you understand a little bit more about Big-O notation.

Happy learning!!
