CommonLounge Archive

Why do we need (efficient) algorithms?

September 13, 2016

Introduction

Computers can only do a limited number of operations in a given time, usually about a billion operations per second. Here, by an individual operation, we mean addition, multiplication, assigning a value to a variable, etc. Hence, if we want our algorithm to finish executing in a second or so, then we need to make sure that the total number of operations we are asking it to do is less than 1 billion.

Estimating the exact run-time of an algorithm without implementing and executing it is very tough, since it depends on many factors - the programming language being used, the compiler, the hardware of the computer itself, and more. Instead, what we'd like to do most of the time is to estimate the run-time approximately.
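
For example, the following minimal sketch (not part of the original article; count_up is just an illustrative helper) times a simple loop on your own machine. In an interpreted language like Python the result typically comes out well below a billion iterations per second, which is part of why we only estimate run-times approximately rather than predict them exactly.

import time

def count_up(n):
    # Each iteration does a fixed handful of basic operations
    # (comparison, addition, assignment).
    total = 0
    for i in range(n):
        total += 1
    return total

n = 10_000_000
start = time.perf_counter()
count_up(n)
elapsed = time.perf_counter() - start
print(f"{n:,} iterations in {elapsed:.2f} s, "
      f"about {n / elapsed:,.0f} iterations per second")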

Rate of growth

In algorithms, we focus on estimating the execution time as a function of the inputs. In particular, the main focus is how quickly the execution time grows as a function of the input size. For example, $f(n) = 5n$ and $g(n) = 100n + 1000$ are equivalent functions in terms of growth with respect to $n$. We simply say both the functions grow linearly. Whereas, $f(n) = 5n$ and $h(n) = n^2$ are not equivalent, since $h(n)$ grows quadratically.
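
Here is a quick numeric illustration (a small sketch, not part of the original article) that evaluates the three functions for increasing $n$: the ratio $g(n)/f(n)$ settles near a constant, while $h(n)/f(n)$ keeps growing.

def f(n): return 5 * n
def g(n): return 100 * n + 1000
def h(n): return n * n

for n in [10, 100, 1000, 10000]:
    # g(n)/f(n) approaches 20, while h(n)/f(n) grows without bound.
    print(n, f(n), g(n), h(n), round(g(n) / f(n), 1), round(h(n) / f(n), 1))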

Let’s take an example. Take a look at the following Python code.

def function1(n):
    # Increments a once for every (i, j) pair: n * n times in total.
    a = 0
    for i in range(n):
        for j in range(n):
            a += 1
    return a

def function2(n):
    # Adds n to a, n times: a ends up as n * n.
    a = 0
    for i in range(n):
        a += n
    return a

def function3(n):
    # Computes n * n directly.
    return n * n
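
If you run the three Python functions above, a rough timing comparison (an illustrative sketch; the exact times depend on your machine) makes the difference in growth visible: doubling $n$ roughly quadruples function1's time, roughly doubles function2's, and leaves function3 essentially instant.

import time

for n in [1000, 2000, 4000]:
    start = time.perf_counter()
    function1(n)                      # about n * n additions
    t1 = time.perf_counter() - start

    start = time.perf_counter()
    function2(n)                      # about n additions
    t2 = time.perf_counter() - start

    print(f"n={n}: function1 {t1:.4f}s, function2 {t2:.6f}s")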

Big-O notation

In the above code, function1 takes time proportional to $n^2$. In particular, notice that the operation a += 1 executes $n^2$ times. Similarly, function2 takes time proportional to $n$, and function3 takes a constant amount of time to run regardless of the value of $n$. We write the run-time of function1 as O($n^2$), of function2 as O($n$), and of function3 as O(1).

Having a run-time of O($n$) means that the run-time is at most $cn$ for some constant value $c$ that doesn’t depend on $n$. For function2 in the above example, assuming variable assignment, addition and comparison are one operation each, we can say that the number of operations being done is less than $10n$. In particular, we do approximately $2n$ assignments, $2n$ additions and $n$ comparisons, which is a total of about $5n$ operations. (Here, we’re treating a += n as an addition operation followed by an assignment operation, and we’re also counting the comparison and increment of the loop variable i that happen in each iteration of the loop.)
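
To make the counting concrete, here is a sketch (not from the original article) of function2 instrumented with an explicit operation counter, using the same accounting assumptions and with the loop variable handled explicitly:

def function2_counted(n):
    ops = 0
    a = 0; ops += 1          # assignment: a = 0
    i = 0; ops += 1          # assignment: i = 0
    while True:
        ops += 1             # comparison: i < n
        if i >= n:
            break
        a += n; ops += 2     # addition + assignment
        i += 1; ops += 2     # addition + assignment
    return a, ops

print(function2_counted(1000))   # (1000000, 5003): about 5n, comfortably below 10n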

Writing a function in Big-O notation

Converting a function to big-O notation requires two simple rules.

  1. Only consider the highest-order term. (Ignore all lower-order terms.)
  2. Ignore any coefficients or constant factors.

Let’s take an example. Suppose the total number of operations being done is $5n^2 + 30n + 6$. In big-O notation, this function is $O(n^2)$. To get this, first we drop the lower order terms ($30n$ and the constant $6$), and only keep the highest-order term $5n^2$. Next, we drop the constant factor 5.

(Figure: plot of $5n^2$ vs $30n + 6$. As $n$ grows larger, $30n + 6$ becomes negligibly small compared to $5n^2$.)
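
A quick numeric check of the same point (a small sketch, not part of the original article):

for n in [10, 100, 1000, 100000]:
    total = 5 * n * n + 30 * n + 6
    leading = 5 * n * n
    # The ratio approaches 1 as n grows: the lower-order terms stop mattering.
    print(n, total, leading, round(total / leading, 4))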

In our earlier example, we had $f(n) = 5n = O(n)$ and $g(n) = 100n + 1000 = O(n)$. Since both $f(n)$ and $g(n)$ are O($n$), we said $f(n)$ is equivalent to $g(n)$. However, since $h(n) = n^2 = O(n^2)$, we concluded that $h(n)$ is not equivalent to $f(n)$.

Why do we drop the constant factors?

The big-O notation ignores constants to focus on rate of growth. In the previous example, we can see that when we have a value of $n$ that is 4 times larger, both $f(n)$ and $g(n)$ will become 4 times larger. However, $h(n)$ will become 16 times larger. This is what we mean when we say that big-O notation focuses on how quickly a function grows.
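
Working this out from the definitions above: $f(4n) = 5 \cdot 4n = 4 \cdot f(n)$ and $g(4n) = 400n + 1000 \approx 4 \cdot g(n)$ for large $n$, whereas $h(4n) = (4n)^2 = 16n^2 = 16 \cdot h(n)$.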

In practice though, we can’t completely ignore constant factors. Usually, all we need to keep in mind regarding constants is that if we do 10 times more operations in the part of the code that executes the most times, then the code will take about 10 times longer to execute. In the code examples we saw, this would mean doing 10 times more operations inside the innermost for loop. That is, we try to roughly estimate the constant factor for the highest-order term.

However, assigning the constants meaning in an absolute sense is not feasible, since there are other constants buried all over the place. For example, different operations (addition vs multiplication vs comparison) take different amounts of time to execute. Different hardware takes different amounts of time to perform different operations, and some hardware might be capable of combining two operations into one (is a += 1 one operation, an increment, or two operations, an addition followed by an assignment?).

Common algorithm complexities and their run-times

The table below lists some of the commonly occurring run-time complexities. The second column indicates the largest value of $n$ for which the code would finish running within one second (assuming 1 billion operations per second, and a constant factor $c$ of about 5-10). The third column provides examples of algorithms (many of which we’ll be learning in this course) with the stated run-time complexity.

Complexity | Max n       | Examples
-----------|-------------|------------------------------------------------------
O(log n)   | Very Large! | Binary search, Exponentiation by repeated squaring
O(n)       | 100,000,000 | Linear search, Breadth-first search
O(n log n) |   5,000,000 | Merge-sort, Quick-sort, Heap-sort, Dijkstra's algorithm
O(n^2)     |      10,000 | Bubble-sort
O(n^3)     |         400 | All-pairs shortest paths
O(2^n)     |          25 | Listing all subsets
O(n!)      |          11 | Listing all permutations
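
The "Max n" column can be roughly reproduced with a short Python sketch (illustrative, not from the original article; it assumes a budget of $10^9$ operations and a constant factor of 10, so the results are ballpark figures that won't match the table exactly):

import math

BUDGET = 10**9   # roughly one second worth of operations
C = 10           # assumed constant factor

def max_n(cost):
    # Largest n with C * cost(n) <= BUDGET:
    # grow an upper bound by doubling, then binary search below it.
    hi = 1
    while C * cost(hi) <= BUDGET:
        hi *= 2
    lo, best = 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if C * cost(mid) <= BUDGET:
            best, lo = mid, mid + 1
        else:
            hi = mid - 1
    return best

for name, cost in [
    ("n",       lambda n: n),
    ("n log n", lambda n: n * math.log2(n)),
    ("n^2",     lambda n: n * n),
    ("n^3",     lambda n: n ** 3),
    ("2^n",     lambda n: 2 ** n),
    ("n!",      lambda n: math.factorial(n)),
]:
    print(name, max_n(cost))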

Further reading

  1. Tutorial on Computational Complexity by Michal Forišek | TopCoder. Michal Forišek is one of the organizers of the International Olympiad in Informatics.
  2. Asymptotic Notation - Thomas Cormen and Devin Balkcom | Khan Academy

Video tutorial

Finally, if you prefer video tutorials, here is a good video explaining computational complexity in more detail, as well as asymptotic notation (Big-O notation).


© 2016-2022. All rights reserved.