Algorithms' Time Complexity the Simple Way
Why should you learn to write efficient algorithms?
- To be a much better software engineer (and get better jobs/income).
- Spend less time debugging, optimizing, and rewriting code.
- Your software will run faster with the same hardware.
What are algorithms?
Algorithms are a set of instructions to perform a task. For example, when you cook, you follow a recipe to prepare a dish.
How to improve your coding skills?
Measurement is the first step that leads to control and eventually to improvement. If you can’t measure something, you can’t understand it. If you can’t understand it, you can’t control it. If you can’t control it, you can’t improve it.
H. J. Harrington
But how do you measure your code? Would you bring a stopwatch and measure the time it takes to execute? That doesn’t make sense, because the result depends on the device running the code. Run the same program on a mobile device and on a quantum computer, and the same code will give you different results, right? Now it’s time to learn about time complexity!
Time Complexity
Time complexity is not about timing how long an algorithm takes to run. Instead, it is about how many operations are executed. The number of instructions a program executes is affected by the size of the input and how its elements are arranged.
Why is time complexity expressed as a function of the input? Well, let’s say you want to sort an array of numbers. If the elements are already sorted, the program will perform fewer operations. Conversely, if the items are in reverse order, it will require more work to get them sorted. So, the time a program takes to execute is directly related to the input size and how the elements are arranged.
We can say that each algorithm has the following running times:
- Worst-case time complexity (e.g., input elements in reversed order)
- Best-case time complexity (e.g., already sorted)
- Average-case time complexity (e.g., elements in random order)
We usually care more about the worst-case time complexity (we hope for the best but prepare for the worst).
Calculating time complexity
Here’s a code example of how you can calculate time complexity: finding the maximum number in an array of numbers.
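Here is a minimal Python sketch of such a function (the name getMaxElement follows the text; the implementation details are illustrative assumptions):

```python
def getMaxElement(numbers):
    max_value = 0                  # 1 operation (assumes non-negative numbers)
    n = len(numbers)               # 1 operation
    for index in range(n):         # the loop body executes n times
        current = numbers[index]   # 1 operation per iteration
        if current > max_value:    # worst case (descending input): the branch
            max_value = current    # is taken every time: 1 operation per iteration
    return max_value               # 1 operation
```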
We can represent getMaxElement as a function of the size of its input n, based on the number of operations it has to perform. For simplicity, let’s assume that each line of code takes the same amount of CPU time to execute. Let’s make the sum:
The assignment of max_value: 1 operation
The assignment of n: 1 operation
The for loop: its body executes n times
The assignment of current: 1 operation
The conditional update of max_value: this one is tricky. It is inside a condition, so we will assume the worst case, where the array is sorted in descending order and the if block executes on every iteration. Thus, 1 operation
The return statement: 1 operation
So, we have 3 operations outside the loop and 2 inside the loop. Since the loop runs n times, this leaves us with 2n + 3.
However, this expression is too specific, and it is hard to compare algorithms with it. We are going to apply asymptotic analysis to simplify it further.
Asymptotic analysis
Asymptotic analysis is the big idea that handles the issues above. In asymptotic analysis, we evaluate the performance of an algorithm in terms of its input size (we don’t measure the actual running time). We calculate how the time (or space) taken by an algorithm grows with the input size.
In our previous example, 2n + 3, we can treat it as just 2n, because the constant term doesn’t really matter. Here’s the reason: if our array has 1 million elements, three extra operations don’t have a significant impact on the cost of the algorithm, so we can simplify the expression to 2n.
Believe it or not, even the constant factor 2 doesn’t make much of a difference.
We only need an approximation of the cost of this algorithm relative to its input size, and both n and 2n represent linear growth. Now we can say that this function has a time complexity of n.
Big-O notation and Growth rate of Functions
The Big O notation combines what we learned in the last two sections about worst-case time complexity and asymptotic analysis.
The letter O refers to the order of a function.
Big O notation is used to classify algorithms by their worst-case running time, also referred to as the upper bound of the growth rate of a function.
In our previous example with the getMaxElement function, we can say it has a time complexity of O(n).
Most Common Time Complexities
1- O(1) — Constant time
2- O(log n) — Logarithmic time
3- O(n) — Linear time
4- O(n log n) — Quasilinear time
5- O(n^2) — Quadratic time
6- O(2^n) — Exponential time
1- O(1) — Constant time
An algorithm is said to have a constant time when it is not dependent on the input data (n). No matter the size of the input data, the running time will always be the same. For example:
```python
print("Hello World!")
```
Another example:
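Here is a minimal sketch of such a snippet, assuming three fixed operations (the function name is hypothetical):

```python
def sumFirstTwo(items):
    a = items[0]   # 1 operation
    b = items[1]   # 1 operation
    return a + b   # 1 operation
```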
There are three operations here, so why do we consider the time complexity to be O(1)? O(1) means the algorithm takes a constant number of operations, independently of the input data size.
Do not be fooled by one-liners. They don’t always translate to constant times. You have to be aware of how they are implemented.
If you use a method like Array.sort() or any other array or object method, you have to look into its implementation to determine the running time.
2- O(log n) — Logarithmic time
An algorithm is said to have a logarithmic time complexity when it reduces the size of the input data in each step (it doesn’t need to look at all values of the input data). For example, suppose we want to look up a word in a dictionary. As you know, the words are sorted alphabetically.
If you are looking for a word, then there are at least two ways to do it:
Algorithm A (Linear Search):
- Start on the first page of the dictionary and go word by word until you find what you are looking for.
Algorithm B (Binary Search):
- Open the dictionary in the middle and check the first word on it.
- If the word you are looking for is alphabetically greater, look in the right half. Otherwise, look in the left half.
- Divide the remainder in half again, and repeat step #2 until you find the word you are looking for.
Which one is faster? The first algorithm goes word by word, O(n), while algorithm B splits the problem in half on each iteration, O(log n). This second algorithm is a binary search.
Steps of the binary search (a code sketch follows the list):
- Calculate the middle of the list.
- If the searched value is lower than the value in the middle of the list, set a new right boundary.
- If the searched value is higher than the value in the middle of the list, set a new left boundary.
- If the searched value is equal to the value in the middle of the list, return the middle (the index).
- Repeat the steps above until the value is found or the boundaries cross (the left boundary becomes greater than the right boundary).
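Put together, a minimal Python sketch of these steps could look like this (the function name and the -1 returned on a miss are illustrative choices):

```python
def binary_search(sorted_list, target):
    left, right = 0, len(sorted_list) - 1
    while left <= right:                # stop when the boundaries cross
        middle = (left + right) // 2    # calculate the middle of the list
        if target < sorted_list[middle]:
            right = middle - 1          # set a new right boundary
        elif target > sorted_list[middle]:
            left = middle + 1           # set a new left boundary
        else:
            return middle               # found: return the middle (the index)
    return -1                           # not found
```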
3- O(n) — Linear time
An algorithm is said to have a linear time complexity when the running time increases at most linearly with the size of the input data. It means that as the input grows, the algorithms take proportionally longer to complete.
We have already seen an example of O(n) (getMaxElement). Let’s take another one, linear search:
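A minimal sketch of linear search (the names are illustrative):

```python
def linear_search(items, target):
    for index, item in enumerate(items):  # visit the elements one by one
        if item == target:
            return index                  # found: return its position
    return -1                             # not found after checking all n items
```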
Note that in this example, in the worst case we need to look at all the values in the list to find the one we are looking for.
4- O(n log n) — Quasilinear (Linearithmic) time
It’s slightly slower than a linear algorithm. Let’s take an example:
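The sketch below matches the description that follows: a first loop that halves its counter each step, with a second, linear loop nested inside it (the function itself is hypothetical):

```python
def quasilinear(n):
    operations = 0
    i = n
    while i > 0:             # first loop: i is halved each step -> O(log n) iterations
        for j in range(n):   # second loop: runs n times per outer iteration
            operations += 1  # constant-time work
        i = i // 2           # divide by two, as in binary search
    return operations        # roughly n * log2(n)
```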
The first loop halves its counter on each step, the same pattern we saw in the binary search algorithm, so its time complexity is O(log n). The second loop’s running time increases linearly with the size of the input, so its time complexity is O(n). Since the second loop is nested inside the first, the time complexity of this example is O(n log n).
Other examples of quasilinear algorithms are merge sort and, on average, quicksort.
5- O(n^2) — Quadratic time
A function with a quadratic time complexity has a growth rate of n^2. If the input is size 2, it will do 4 operations. If the input is size 8, it will do 64, and so on. Let’s take an example:
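A minimal sketch: one loop nested inside another, both over the same input (the function is hypothetical):

```python
def print_all_pairs(items):
    for first in items:           # runs n times
        for second in items:      # runs n times for each outer iteration
            print(first, second)  # executed n * n times in total
```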
Time complexity here is O(n * n), or O(n^2). But what if we have another loop before or after this nested loop? For example:
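A hypothetical sketch: the same nested loops, with an extra single loop before them:

```python
def print_items_then_pairs(items):
    for item in items:            # a single loop: O(n)
        print(item)
    for first in items:           # the nested loops from before: O(n^2)
        for second in items:
            print(first, second)
```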
Time complexity here is O(n + n^2). Once again, we can simplify this: for any large input, the n^2 term is greater than the n term and grows much faster.
Again, we use Big-O notation to understand how the cost of an algorithm grows; all we need is an approximation, not an exact value. So we can drop the n term, and the time complexity becomes O(n^2).
Another example of Quadratic time complexity is the bubble sort algorithm:
Bubble sort is the simplest sorting algorithm. It works by repeatedly swapping adjacent elements that are in the wrong order: the array is traversed from the first element to the last, each element is compared with the next one, and if the current element is greater than the next, they are swapped.
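A minimal Python sketch of bubble sort:

```python
def bubble_sort(items):
    n = len(items)
    for i in range(n):                   # one pass per element
        for j in range(n - 1 - i):       # compare adjacent pairs
            if items[j] > items[j + 1]:  # wrong order?
                items[j], items[j + 1] = items[j + 1], items[j]  # swap them
    return items
```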
6- O(2^n) — Exponential time
Exponential (base 2) time complexity means that the number of calculations performed by an algorithm doubles every time the input grows by one. A classic example of an exponential-time algorithm is the recursive calculation of Fibonacci numbers:
Fibonacci numbers
The Fibonacci numbers are the numbers in the following integer sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …
In mathematical terms, the sequence F(n) of Fibonacci numbers is defined by the recurrence relation
F(n) = F(n-1) + F(n-2)
with seed values
F(0) = 0 and F(1) = 1.
If you don’t know what a recursive function is, let’s clarify it quickly: a recursive function is a function that calls itself under certain conditions. As you may have noticed, the time complexity of recursive functions is a little harder to define, since it depends on how many times the function is called and on the time complexity of a single call.
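A minimal sketch of the naive recursive algorithm:

```python
def fibonacci(n):
    if n < 2:        # base cases: F(0) = 0, F(1) = 1
        return n
    # each call spawns two more calls, so the call tree roughly
    # doubles in size as n grows: O(2^n)
    return fibonacci(n - 1) + fibonacci(n - 2)
```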
It makes more sense when we look at the recursion tree. The following recursion tree was generated by the Fibonacci algorithm using n = 4:
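Sketched as text, that call tree looks like this:

```
fibonacci(4)
├── fibonacci(3)
│   ├── fibonacci(2)
│   │   ├── fibonacci(1)  -> 1
│   │   └── fibonacci(0)  -> 0
│   └── fibonacci(1)      -> 1
└── fibonacci(2)
    ├── fibonacci(1)      -> 1
    └── fibonacci(0)      -> 0
```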
Note that the function keeps calling itself until it reaches the leaves (the base cases). At the leaves, it returns a value directly.
Now look at how the recursion tree grows just by increasing n to 6: the total number of calls jumps from 9 to 25.
You can find a more complete explanation of the time complexity of the recursive Fibonacci algorithm on Stack Overflow.
O(n!) — Factorial time
An algorithm is said to have a factorial time complexity when its running time grows factorially with the size of the input data. The factorial function grows like this:
2! = 2 x 1 = 2
3! = 3 x 2 x 1 = 6
4! = 4 x 3 x 2 x 1 = 24
5! = 5 x 4 x 3 x 2 x 1 = 120
6! = 6 x 5 x 4 x 3 x 2 x 1 = 720
7! = 7 x 6 x 5 x 4 x 3 x 2 x 1 = 5,040
8! = 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = 40,320
As you can see, it grows very fast, even for small inputs.
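A classic factorial-time task is generating every permutation of a list: there are n! of them, so just producing the output takes O(n!) time. A minimal sketch (the function name is illustrative):

```python
def permutations(items):
    if len(items) <= 1:
        return [items]
    result = []
    for i, item in enumerate(items):
        rest = items[:i] + items[i + 1:]  # everything except items[i]
        for perm in permutations(rest):   # (n - 1)! permutations of the rest
            result.append([item] + perm)
    return result

print(permutations([1, 2, 3]))  # 3! = 6 permutations
```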
We explored the most common algorithm running times with one or two examples each! They should give you an idea of how to calculate running times when developing your projects. The Big-O Cheat Sheet referenced below includes a chart graphing all the time complexities we covered, plus some additional ones.
Big-O Cheat Sheet
To make your life easier, the Big-O Cheat Sheet also lists the time complexity of operations on the most common data structures.
Why is it important to know all of this?
If, after reading this story, you still have doubts about the importance of knowing time complexity and Big-O notation, let’s clarify some points.
Even when working with modern languages like Python, which provide built-in functions such as sorting algorithms, someday you will probably need to implement an algorithm to perform some kind of operation on a certain amount of data. By studying time complexity, you will understand the important concept of efficiency and will be able to find the bottlenecks in your code that should be improved, especially when working with huge data sets.
Besides that, if you plan to apply for a software engineer position at a big company like Google, Facebook, Twitter, or Amazon, you will need to be prepared to answer questions about time complexity using Big-O notation.
Finally
Thanks for reading this story. I hope you have learned a little more about time complexity and Big-O notation. If you enjoyed it, please give it a clap and share it. If you have any doubts or suggestions, feel free to comment or send me an email. Also, feel free to follow me on LinkedIn and GitHub.
References
- Big-O Cheat Sheet: https://bigocheatsheet.com/
- Big-O notation: https://en.wikipedia.org/wiki/Big_O_notation
- Time complexity: https://en.wikipedia.org/wiki/Time_complexity
- https://towardsdatascience.com/understanding-time-complexity-with-python-examples-2bda6e8158a7
- https://adrianmejia.com/most-popular-algorithms-time-complexity-every-programmer-should-know-free-online-tutorial-course/#O-n-Factorial-time
- Asymptotic analysis: https://www.geeksforgeeks.org/analysis-of-algorithms-set-1-asymptotic-analysis/?ref=lbp