July 8, 2021
Using Big O notation to improve app performance

User experience is essential in modern software, and performance is vital to a good experience: it can make or break your ability to engage and retain users. Applications designed with performance in mind have a greater chance of success than those that were not.

A common misconception is that a simple piece of code cannot do any harm. On the contrary, you should always presume that the consequences of adding a piece of code can be worse than you imagine. The flip side is that it only takes a few lines of code to significantly improve your app’s performance.

In this guide, we’ll explore one of the easiest ways to improve performance in modern applications: using Big O notation to measure the complexity of your code.

What is Big O notation?

Big O notation is a mathematical notation that describes the complexity of an algorithm. It’s an important concept in computer science because it expresses how an algorithm’s cost grows as the size of the input grows.

Image by Huyen Pham.

There are two ways of measuring the complexity of an algorithm:

Space complexity  measures how much memory an algorithm requires as the input size grows. It is essentially estimated by counting the space occupied by the variables and data structures the algorithm allocates (see the sketch after this list)

Time complexity  measures how the running time of an algorithm grows as the input size grows. It essentially depends on how many steps the algorithm needs to perform before it finishes execution
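To make the distinction concrete, here is a minimal sketch with two hypothetical functions (sumItems and doubleItems are illustrative names, not from the original post). sumItems keeps a single accumulator, so its extra space stays constant at O(1), while doubleItems builds a new array that grows with the input, so its extra space is O(n). Both make one pass over the input, so their time complexity is O(n):

// O(1) extra space: a single accumulator variable, no matter how large arr is
const sumItems = (arr) => {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i];
  }
  return total;
};

// O(n) extra space: the result array grows with the size of the input
const doubleItems = (arr) => {
  const doubled = [];
  for (let i = 0; i < arr.length; i++) {
    doubled.push(arr[i] * 2);
  }
  return doubled;
};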

We can estimate the time complexity of an algorithm by counting how many steps it performs relative to the size of its input, rather than by timing it with a stopwatch. When calculating the complexity of an algorithm, we consider three scenarios:

Best case  —  when the algorithm completes in the fewest steps possible. This is the most favorable scenario

Average case  —  when the algorithm completes in a typical, in-between number of steps

Worst case  —  when the algorithm completes in the greatest number of steps possible. This is the least favorable scenario

When measuring the complexity of an algorithm using Big O notation, you typically consider the worst-case scenario. The “O” in Big O notation stands for the order of the function, and the “n” stands for the size of the input.

O(1)

The best time complexity an algorithm can have is constant time, also known as O(1). Algorithms with constant time complexity always take the same amount of time to execute; their running time is independent of the size of the input.

Imagine we have a function that returns the square of a number:

const returnSquare = (num) => num * num;

The returnSquare function will always take the same amount of time to execute. This is how constant time works: the algorithm runs in the same amount of time, no matter the size of the input.

Now, imagine we have a function that receives an array and always returns its first element, no matter how large the array is.

const getFirstItem = (arr) => arr[0];

The getFirstItem function has a constant time complexity because it will run in the same amount of time no matter how much the array grows in size.

O(n)

The most common time complexity is linear time, also known as O(n).

An algorithm has linear time complexity when the time it takes to run grows in direct proportion to the size of the input.

Imagine that we have a simple array and we want to iterate over it to find a specific item:

const searchItem = (arr, item) => {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === item) {
      return item;
    }
  }
};

In the best-case scenario, the item we’re looking for is the first element and we don’t need to walk through the rest of the array. In the worst-case scenario, the item is the last one and we have to iterate over the entire array.
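For example, calling the function on a small, hypothetical array makes the difference visible:

searchItem([5, 8, 13, 21], 5);  // best case: a match on the very first comparison
searchItem([5, 8, 13, 21], 21); // worst case: every element is checked before a match is found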

As our array grows, the running time of this algorithm grows linearly. Whenever we see a single loop over the input in an algorithm, that code is likely to have linear time complexity.

O(log n)

You may have studied logarithms in school. Logarithms are mathematical operations that determine how many times a certain number needs to be multiplied by itself to reach another number.

Imagine that we have an array of 10 elements and iterating over it takes one second. With linear complexity, an array of 20 elements would take two seconds, 30 elements would take three seconds, and so on. With logarithmic complexity, however, the work grows far more slowly: each time the input doubles, the algorithm only needs roughly one extra step.

A good example of an O(log n) algorithm is a binary search. A binary search finds the position of a specific element in a sorted array by dividing the array in half in each iteration:

Image made using Excalidraw.

At each step, the algorithm cuts the size of the problem in half: each iteration of a binary search discards half of the remaining array until it finds the item it’s looking for, as shown in the sketch below.
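The post doesn’t include code for the search itself, so here is a minimal sketch in JavaScript, assuming the input array is already sorted and that returning the index (or -1 when the item is missing) is an acceptable contract:

// A minimal binary search: the array must already be sorted
const binarySearch = (sortedArr, item) => {
  let low = 0;
  let high = sortedArr.length - 1;

  while (low <= high) {
    // Look at the middle of the remaining range
    const mid = Math.floor((low + high) / 2);

    if (sortedArr[mid] === item) {
      return mid; // found the item
    }

    if (sortedArr[mid] < item) {
      low = mid + 1; // discard the left half
    } else {
      high = mid - 1; // discard the right half
    }
  }

  return -1; // the item is not in the array
};

binarySearch([1, 3, 5, 8, 13], 8); // returns 3, after only a handful of comparisons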

O(n^2)

An algorithm has a quadratic time complexity when the runtime is proportional to the square of the size of the input.

Imagine that we have two arrays, and for each item in the first array, we want to loop over the second array and compare the elements:

const findItem = (arr, newArr) => {
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < newArr.length; j++) {
      if (arr[i] === newArr[j]) {
        console.log('hello!');
      }
    }
  }
};

This is an example of a quadratic time complexity algorithm. Nested loops over the input multiply the work rather than doubling it: every time the size of our arrays doubles, the number of comparisons roughly quadruples.
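To see why recognizing this pattern matters, here is a sketch of one common way to bring the comparison down to roughly linear time by trading a little space for speed. The findItemFaster name and the use of a Set are illustrative choices, not part of the original example:

// Roughly O(n) time and O(n) space: build a Set once, then do constant-time lookups
const findItemFaster = (arr, newArr) => {
  const lookup = new Set(newArr);
  for (let i = 0; i < arr.length; i++) {
    if (lookup.has(arr[i])) {
      console.log('hello!');
    }
  }
};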

O(n!)

O(n!) is among the worst time complexities an algorithm can have. When writing code, you don’t want to end up with a time complexity of O(n!), also known as factorial time complexity.

An algorithm with O(n!) time complexity blows up much faster than you might imagine: it’s as if we added another nested loop for every element of the input. A classic example is generating every possible ordering of a list.
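As an illustrative sketch (the permutations function below is an assumption for demonstration, not something from the original post), a naive routine that generates every ordering of an array does on the order of n! work:

// Returns every possible ordering of the input array.
// There are n! orderings, so both the time and the output size grow factorially.
const permutations = (arr) => {
  if (arr.length <= 1) {
    return [arr];
  }

  const result = [];
  for (let i = 0; i < arr.length; i++) {
    // Remove the ith element and permute the rest
    const rest = [...arr.slice(0, i), ...arr.slice(i + 1)];
    for (const perm of permutations(rest)) {
      result.push([arr[i], ...perm]);
    }
  }
  return result;
};

permutations([1, 2, 3]).length; // 6, but for just 10 items it would be 3,628,800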

It’s good to know that this is possible, but you probably don’t want to write code with this time complexity.

Conclusion

Developers often judge the quality of code by its readability. There’s nothing wrong with using readability as a benchmark, but it’s not the only one you should consider.

Performance plays a crucial role in all modern software, but writing performant code is not always simple. It’s important to be aware of the level of complexity in your codebase and to avoid creating things that are unnecessary.

Big O notation can help you write performant code by giving you a way to measure the complexity of that code. The concept has been around for many years and continues to help developers write engaging, performant software.
