(In case you are wondering, ELI5 = Explain Like I'm Five.)
In computer science we have the Big-O notation. It is a way to compare the complexity of algorithms. An algorithm is defined by how its code solves a problem, and we want an algorithm that is fast. For programmers, it is useful to understand the concept of Big-O thoroughly enough to produce some examples in code.
To explain what the Big O notation is, let us do some simple algebra.
As we all know, in math, the most basic operations are addition, subtraction, multiplication, and division.
How does it work?
Say we have two numbers, 12345 and 56789. Since primary school, if we wanted to add those two numbers, we would first line them up to the right, then add each digit column by column from right to left. If a column's sum reaches 10, we carry 1 to the next column on the left.
  12345
+ 56789
-------
  69134
The number of steps we take is about 5 (plus a few more for carrying). This means adding two 5-digit numbers takes about 5 steps.
What if we add two 10-digit numbers? That would take about 10 steps. Two 1000-digit numbers? 1000 steps.
So we can see a linear pattern here as the problem gets bigger. The number of steps is directly proportional to the size of the problem, i.e. the number of digits (n) that we need to add up. Hence an addition operation has linear complexity, or as we denote it in Big O notation, O(n).
The same complexity applies to subtraction as well.
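The column-by-column addition above can be sketched in code. This is a minimal illustration that counts one step per digit column; the function name and step counter are made up for this example, not a standard API:

```python
def add_digits(a: str, b: str) -> tuple[str, int]:
    """Add two non-negative numbers given as digit strings,
    returning the sum and the number of column steps taken."""
    # Line the numbers up to the right by padding the shorter one.
    a, b = a.zfill(len(b)), b.zfill(len(a))
    result, carry, steps = [], 0, 0
    for da, db in zip(reversed(a), reversed(b)):  # right to left
        carry, digit = divmod(int(da) + int(db) + carry, 10)
        result.append(str(digit))
        steps += 1  # one step per column
    if carry:
        result.append(str(carry))
    return "".join(reversed(result)), steps

print(add_digits("12345", "56789"))  # ('69134', 5): 5 columns, 5 steps
```

Doubling the number of digits doubles the loop count and nothing else, which is exactly what O(n) means.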
Multiplication, however, is not linear. The complexity of a multiplication operation is said to be quadratic, O(n²). This is because in long multiplication we line the numbers up and multiply every digit of one number by every digit of the other, then add up the partial results as we move to the left. So to multiply two 6-digit numbers, we need to do 36 single-digit multiplications plus 10 or 11 additions to get the end result.
As the problem gets bigger (more digits, larger n), say multiplying two 100-digit numbers, we need 10,000 multiplications plus about 200 additions. Imagine if it gets really, really big, say 1 million digits. The number of multiplication steps grows much faster than the number of additions, so at that point the addition steps are negligible. That is why we say multiplication has quadratic complexity, O(n²).
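Long multiplication can be sketched the same way, counting one step per single-digit multiplication. Again, the function name and return shape are just for illustration:

```python
def multiply_digits(a: str, b: str) -> tuple[int, int]:
    """Multiply two numbers given as digit strings, returning
    the product and the number of single-digit multiplications."""
    steps = 0
    total = 0
    for i, db in enumerate(reversed(b)):       # each digit of b...
        partial = 0
        for j, da in enumerate(reversed(a)):   # ...meets each digit of a
            partial += int(da) * int(db) * 10 ** j
            steps += 1  # one single-digit multiplication per pair
        total += partial * 10 ** i  # shift and add the partial result
    return total, steps

print(multiply_digits("123456", "654321")[1])  # 36 steps for two 6-digit numbers
```

The nested loop runs n × n times, which is where the n² comes from.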
This is another important part of the Big O notation: we only care about the most significant term of an algorithm's complexity.
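Using the rough step counts from the multiplication example (about n² single-digit multiplications and about 2n additions), a quick loop shows how the additions shrink to nothing as n grows:

```python
# Rough step counts from long multiplication of two n-digit numbers:
# about n**2 single-digit multiplications plus about 2*n additions.
for n in (10, 100, 1_000, 1_000_000):
    mults, adds = n**2, 2 * n
    share = adds / (mults + adds)  # fraction of the work spent on additions
    print(f"n={n:>9}: {mults} multiplications, {adds} additions "
          f"({share:.4%} of the total)")
```

At n = 10 the additions are a noticeable slice of the work; by n = 1,000,000 they are a vanishing fraction, which is why we drop them and write simply O(n²).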
Why do I need to know this?
The reason we care about the Big-O notation and the complexity of algorithms is that they affect the real world in a lot of ways. We always want the approach that finds the answer in the fastest way, which also saves computational power, resources, and money.
Finally, the two examples given only scratch the surface of the Big O notation. We still have logarithmic complexity, constant complexity, and factorial complexity, which I will leave you to research if you are interested in the subject of algorithms and optimization.
For a visual explanation on Big O, you can watch this video here.
(Image thanks to Sasa.)