Computers can only do a limited number of operations in a given time, usually about a billion operations per second. Here, by an individual operation, we mean addition, multiplication, assigning a value to a variable, etc. Hence, if we want our algorithm to finish executing in a second or so, then we need to make sure that the total number of operations we are asking it to do is less than 1 billion.
Estimating the exact run-time of an algorithm without implementing and executing it is very tough, since it depends on so many factors - including the programming language being used, the compiler, the hardware of the computer itself, and more. Instead, what we'd like to do most of the time is estimate the run-time approximately.
Rate of growth
In algorithms, we focus on estimating the execution time by its rate of growth as a function of the input size, rather than by its exact value.
Motivation: Given an array of N numbers, you need to support two operations. Operation 1: find-min(i, j) = return the minimum value in array[i ... j]. Operation 2: update(i, v) = update the value at array[i] to v. Solve the problem for N <= 10^6, number of operations <= 10^6.
To solve the above problem, both operations need to run in O(log N) time, but using a naive array gives O(N) run-time for operation 1 (and O(1) run-time for operation 2). So how do you solve the problem? Read on. :)
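One standard way to get O(log N) for both operations is a segment tree. Below is a minimal sketch (the struct and method names are illustrative, not from the original text): each node stores the minimum of a segment of the array, so both a range-minimum query and a point update only touch O(log N) nodes.

```cpp
#include <algorithm>
#include <climits>
#include <vector>

// Segment tree sketch: range-minimum query and point update in O(log N).
struct SegTree {
    int n;
    std::vector<int> t; // t[1] is the root; children of node v are 2v and 2v+1

    SegTree(const std::vector<int>& a) : n(a.size()), t(4 * a.size(), INT_MAX) {
        build(a, 1, 0, n - 1);
    }
    void build(const std::vector<int>& a, int v, int l, int r) {
        if (l == r) { t[v] = a[l]; return; }
        int m = (l + r) / 2;
        build(a, 2 * v, l, m);
        build(a, 2 * v + 1, m + 1, r);
        t[v] = std::min(t[2 * v], t[2 * v + 1]);
    }
    // Operation 1: minimum value in array[i ... j]
    int findMin(int v, int l, int r, int ql, int qr) {
        if (qr < l || r < ql) return INT_MAX;  // segment entirely outside query
        if (ql <= l && r <= qr) return t[v];   // segment entirely inside query
        int m = (l + r) / 2;
        return std::min(findMin(2 * v, l, m, ql, qr),
                        findMin(2 * v + 1, m + 1, r, ql, qr));
    }
    int findMin(int i, int j) { return findMin(1, 0, n - 1, i, j); }
    // Operation 2: set array[i] = val
    void update(int v, int l, int r, int i, int val) {
        if (l == r) { t[v] = val; return; }
        int m = (l + r) / 2;
        if (i <= m) update(2 * v, l, m, i, val);
        else update(2 * v + 1, m + 1, r, i, val);
        t[v] = std::min(t[2 * v], t[2 * v + 1]);
    }
    void update(int i, int val) { update(1, 0, n - 1, i, val); }
};
```

With N <= 10^6 and 10^6 operations, this gives roughly 10^6 * 20 ≈ 2 * 10^7 elementary steps, comfortably under the billion-operations-per-second budget discussed above.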
Video tutorial: This is a superb tutorial, giving the motivation, walking through an example, and going step-by-step through the pseudocode.
In this tutorial we will be discussing dynamic programming on trees, a very popular algorithmic technique that solves many problems involving trees. We'll be learning this technique by example. We'll take a problem solving approach in this tutorial, not just describing what the final solution looks like, but walking through how one might go about solving such problems.
A first example
Consider a tree in which every node has a positive value written on it. The task is to select several nodes from the tree (you may also select none), so that the sum of values of the nodes you select is maximized, and no two adjacent nodes are selected. The desired solution has complexity O(n).
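This is the maximum-weight independent set problem on a tree, and the O(n) solution is a DP over subtrees. A hedged sketch (array sizes and names are illustrative): for each node v we compute dp[v][0], the best sum in v's subtree when v is not selected, and dp[v][1], the best sum when v is selected, via one DFS.

```cpp
#include <algorithm>
#include <vector>

// Tree DP sketch for maximum-weight independent set on a tree, O(n).
// dp[v][0] = best sum in v's subtree if v is NOT selected
// dp[v][1] = best sum in v's subtree if v IS selected
const int MAXN = 100005;            // illustrative bound
std::vector<int> adj[MAXN];         // adjacency list of the tree
long long value[MAXN], dp[MAXN][2];

void dfs(int v, int parent) {
    dp[v][0] = 0;
    dp[v][1] = value[v];
    for (int c : adj[v]) {
        if (c == parent) continue;
        dfs(c, v);
        // v not selected: each child is free to be selected or not
        dp[v][0] += std::max(dp[c][0], dp[c][1]);
        // v selected: adjacent children must not be selected
        dp[v][1] += dp[c][0];
    }
}
```

The answer for the whole tree rooted at node r is max(dp[r][0], dp[r][1]).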
All Pairs Shortest Path (Floyd Warshall)
The Floyd Warshall algorithm is an all-pairs shortest path algorithm: it computes the length of the shortest path between every pair of nodes in a graph.
The most important questions are:
1: What are the main differences between Floyd Warshall and Dijkstra?
The Floyd Warshall algorithm is a dynamic programming based algorithm. A shortest path between 2 nodes in a graph is a chain of nodes, and the DP asks, for each node k in turn, whether routing that chain through k as an intermediate node makes any pair's path shorter. (The recurrence is closely related to repeated min-plus matrix multiplication, not to the classic "matrix chain multiplication" problem.)
The Floyd Warshall algorithm runs in O(V^3) time, where V is the number of nodes in our graph.
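The whole algorithm fits in three nested loops. A minimal sketch (function and variable names are illustrative): dist starts as the direct-edge matrix, with 0 on the diagonal and INF where there is no edge, and after processing each intermediate node k it holds the all-pairs shortest path lengths.

```cpp
#include <algorithm>
#include <vector>

const long long INF = 1e18;

// Floyd Warshall sketch: dist[i][j] is initially the direct edge weight
// (INF if no edge, 0 on the diagonal); afterwards it is the shortest
// path length from i to j. O(V^3) overall.
void floydWarshall(std::vector<std::vector<long long>>& dist) {
    int V = dist.size();
    for (int k = 0; k < V; k++)             // allow k as an intermediate node
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] < INF && dist[k][j] < INF)
                    dist[i][j] = std::min(dist[i][j], dist[i][k] + dist[k][j]);
}
```

Note the intermediate-node loop over k must be the outermost one; swapping it inside breaks the DP invariant.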
Since the running time is O(V^3) regardless of the number of edges, other shortest path algorithms (such as running Dijkstra from every node) are often better suited for sparse graphs.
2: Why don't we just run Dijkstra from each node in our graph?
Initially, we are given the all-pairs shortest path distances in a matrix. Now, with each new road built, how will you update this distance matrix? (You must not re-run Floyd Warshall after every update, as that would cause a TLE.)
Now we introduce a new road a---b. With this new road, Distance[i][j] either remains unchanged or decreases. Distance[i][j] decreases only if some new path i, v1, v2, ..., vn, j that includes the road a---b (or b---a) is shorter than the current Distance[i][j].
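Since any improved path must use the new road, it splits as i → a → b → j or i → b → a → j, and the whole matrix can be relaxed in O(V^2) per new road instead of O(V^3). A sketch under that observation (the function name, the weight parameter w, and the undirected-road assumption are illustrative, not from the original text):

```cpp
#include <algorithm>
#include <vector>

// O(V^2) update of an all-pairs distance matrix after adding an
// undirected road a---b of length w (interface is an assumption).
// Every improved path uses the new road, so it decomposes as
// i -> a -> b -> j or i -> b -> a -> j.
void addRoad(std::vector<std::vector<long long>>& dist,
             int a, int b, long long w) {
    int V = dist.size();
    dist[a][b] = std::min(dist[a][b], w);
    dist[b][a] = std::min(dist[b][a], w);
    for (int i = 0; i < V; i++)
        for (int j = 0; j < V; j++)
            dist[i][j] = std::min(dist[i][j],
                std::min(dist[i][a] + w + dist[b][j],   // i -> a -> b -> j
                         dist[i][b] + w + dist[a][j])); // i -> b -> a -> j
}
```

With Q new roads this costs O(Q * V^2) in total, versus O(Q * V^3) for re-running Floyd Warshall each time.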