Anudeep Nekkanti

Past: Competitive Programming, Algorithms. Present: Googler turned Entrepreneur.


MO’s Algorithm (Query square root decomposition)

Once again I found a topic that is useful and interesting but has very few resources online. Before writing this I did a small survey, and I was surprised that most competitive programmers do not know this algorithm. It is important to learn it; the red programmers on Codeforces use it effectively in Div 1 C and D problems. There were hardly any problems on this topic a year and a half back, but after that there was a spike! We can expect more problems on it in future contests.

- State the problem
- A simple solution which takes O(N^2)
- Slight modification to above algorithm, still runs in O(N^2)
- MO's algorithm to above problem and its correctness
- Proof for complexity of MO's algorithm – O(sqrt(N) * N)
- Explain where and when we can use above algorithm
- Problems for practice and sample code

You are given an array of size N with all elements of array <= N. You need to answer M queries. Each query is of the form L, R. You need to answer the count of values in range [L, R] which are repeated at least 3 times.

Example: Let the array be {1, 2, 3, 1, 1, 2, 1, 2, 3, 1} (zero indexed)

Query: L = 0, R = 4. Answer = 1. Values in the range [L, R] = {1, 2, 3, 1, 1} only 1 is repeated at least 3 times.

Query: L = 1, R = 8. Answer = 2. Values in the range [L, R] = {2, 3, 1, 1, 2, 1, 2, 3} 1 is repeated 3 times and 2 is repeated 3 times. Number of elements repeated at least 3 times = Answer = 2.

For each query, loop from L to R, count the frequency of each element, and report the answer. Assuming M = O(N), the following runs in O(N^2) in the worst case.

for each query:
    answer = 0
    count[] = 0
    for i in {l..r}:
        count[array[i]]++
        if count[array[i]] == 3:
            answer++

add(position):
    count[array[position]]++
    if count[array[position]] == 3:
        answer++

remove(position):
    count[array[position]]--
    if count[array[position]] == 2:
        answer--

currentL = 0
currentR = 0
answer = 0
count[] = 0
for each query:
    // current window is [currentL, currentR - 1];
    // currentL should go to L, currentR should go to R + 1
    while currentL > L:
        currentL--
        add(currentL)
    while currentR <= R:
        add(currentR)
        currentR++
    while currentL < L:
        remove(currentL)
        currentL++
    while currentR > R + 1:
        currentR--
        remove(currentR)
    output answer

Initially we always looped from L to R; now we adjust the pointer positions from the previous query to the current query.

If previous query was L=3, R=10, then we will have current L=3 and current R=10 by the end of that query. Now if the next query is L=5, R=7, then we move the currentL to 5 and currentR to 7.

The add function adds the element at position to our current window and updates the answer accordingly.

The remove function deletes the element at position from our current window and updates the answer accordingly.

MO’s algorithm is just an order in which we process the queries. We were given M queries; we will re-order the queries in a particular order and then process them. Clearly, this is an offline algorithm. Each query has L and R; we will call them its opening and closing. Let us divide the given input array into Sqrt(N) blocks. Each block will be of size N / Sqrt(N) = Sqrt(N). Every opening and every closing falls in one of these blocks.

A query belongs to the P'th block if its opening falls in the P'th block. In this algorithm we first process the queries of the 1st block, then the queries of the 2nd block, and so on, finally the Sqrt(N)'th block. So we already have a partial ordering: queries are ordered in ascending order of their block. There can be many queries that belong to the same block.

From now on, I will ignore all the other blocks and only focus on how we process and answer the queries of block 1; we do the same for every block. All of these queries have their opening in block 1, but their closing can be in any block, including block 1. Now let us reorder these queries in ascending order of their R value. We do this for all the blocks.

What does the final order look like? All the queries are first ordered in ascending order of their block number (the block in which their opening falls). Ties are ordered in ascending order of their R value.

For example, consider following queries and assume we have 3 blocks each of size 3: {0, 3} {1, 7} {2, 8} {7, 8} {4, 8} {4, 4} {1, 2}

Let us re-order them based on their block number: {0, 3} {1, 7} {2, 8} {1, 2} {4, 8} {4, 4} {7, 8}

Now let us re-order ties based on their R value: {1, 2} {0, 3} {1, 7} {2, 8} {4, 4} {4, 8} {7, 8}

Now we use the same code stated in the previous section and solve the problem. The above algorithm is correct, as we did not make any changes; we just reordered the queries.

We are done with MO’s algorithm; it is just an ordering. The awesome part is its runtime analysis. It turns out that the O(N^2) code we wrote runs in O(Sqrt(N) * N) time if we follow the order specified above. That’s awesome, right? Just by reordering the queries we reduced the complexity from O(N^2) to O(Sqrt(N) * N), and that too without any further modification to the code. Hurray! We will get AC with O(Sqrt(N) * N).

Have a look at our code above: the complexity over all queries is determined by the 4 while loops. The first 2 while loops account for the total amount moved by the left pointer; the last 2 account for the total amount moved by the right pointer. The sum of these two is the overall complexity.

Most important: let us talk about the right pointer first. Within each block the queries are sorted in increasing order of R, so the right pointer (currentR) moves in increasing order. At the start of the next block, the pointer, which is possibly at the extreme right end, moves back to the smallest R in the next block. That means that for a given block, the amount moved by the right pointer is O(N). We have O(Sqrt(N)) blocks, so the total is O(N * Sqrt(N)). Great!

Let us see how the left pointer moves. For each block, the left pointers of all its queries fall in that same block. As we move from query to query the left pointer moves, but since the previous L and the current L fall in the same block, the movement is O(Sqrt(N)) (the size of the block). So in each block the total amount the left pointer moves is O(Q * Sqrt(N)), where Q is the number of queries falling in that block. The total over all blocks is O(M * Sqrt(N)).

There you go: the total complexity is O( (N + M) * Sqrt(N) ) = O( N * Sqrt(N) ) when M = O(N).

As mentioned, this algorithm is offline; that means we cannot use it when we are forced to stick to the given order of queries. That also means we cannot use it when there are update operations. Not just that, there is one more important possible limitation: we should be able to write the functions add and remove. There will be many cases where add is trivial but remove is not. One such example is when we want the maximum in a range. As we add elements, we can keep track of the maximum. But when we remove elements, updating the maximum is not trivial. Anyway, in that case we can use a multiset to add elements, remove elements, and report the maximum. The add and remove operations then become O(log N), resulting in an O(N * Sqrt(N) * log N) algorithm.

There are many cases where we can use this algorithm. In a few cases we can also use other data structures like segment trees, but for some problems using MO’s algorithm is a must. Let’s discuss a few problems in the next section.

- SPOJ DQUERY: D-query ( Solution ) : Number of distinct elements in a range = number of values with frequency >= 1. The same problem we discussed above, with threshold 1 instead of 3.
- Codeforces 86D: Powerful Array : This is an example where MO’s algorithm is a must. I cannot think of any other solution. CF Div1 D means it is a hard problem. See how easy it is using MO’s algorithm in this case. You only need to modify add(), remove() functions in above code.
- CodeChef GERALD07: Chef and Graph Queries
- CodeChef GERALD3: Chef and Substrings
- Codeforces 375D: Tree and Queries
- Codeforces 351D: Jeff and Removing Periods
- CodeChef IITI15: Sherlock and Inversions

I am sure there are more problems, if you know any of them, do comment, I will add them.

While this algorithm has a special name “MO”, it is just smart square root decomposition.

Share this post! Learn and let learn 🙂


Category: Competitive Programming


Persistent segment trees: Explained with SPOJ problems

In this post I will introduce the concept of persistent data structures. We shall deal with a couple of problems: COT, MKTHNUM

Consider the following **question**:

Given a linked list, you need to support 2 operations.

- Update the first x values.
- Print the linked list after k’th update (k <= number of update operations made till now)

A simple solution is to create a new linked list after each update operation and print it when needed. Let us optimize it for memory.

We are only updating the first x elements of the linked list. That means the new linked list will be almost the same as the previous linked list if x is small. We shall take advantage of this and avoid allocating memory for some elements. We can allocate memory for x elements, store the new values in them, then point the next pointer of the x’th node...


Heavy Light Decomposition

This is a long post with a lot of explanation, targeting novices. If you are aware of how HLD is useful, skip to “**Basic Idea**”.

A balanced binary tree with **N** nodes has a height of **log N**. This gives us the following properties:

- You need to visit at most **log N** nodes to reach the **root** node from any other node
- You need to visit at most **2 * log N** nodes to get from any node to any other node in the tree

The **log** factor is always good in the world of competitive programming 🙂

Now, if a balanced binary tree with **N** nodes is given, then many queries can be done with **O( log N )** complexity. Distance of a path, Maximum/Minimum in a path, Maximum contiguous sum etc etc.

A chain is a set of nodes connected one after another. It can be viewed as a simple array of nodes/numbers. We can do many operations on an array of elements with **O( log N )** complexity using **segment tree / BIT /** other data structures. You can learn more about segment trees here.

Now, we know that balanced binary trees and arrays are good for computation. We can do a lot of operations with **O( log N )** complexity on both data structures.

The height of an unbalanced tree can be arbitrary. In the worst case, we have to visit **O( N )** nodes to move from one node to another. So unbalanced trees are not computation friendly. We shall see how we can deal with them.

**Consider the following question**: Given **A **and **B**, calculate the sum of all node values on the path from **A **to **B**.

Here are the details about the given image:

- The tree in the image has **N** nodes.
- We need to visit **N/3** nodes to travel from **A** to **D**.
- We need to visit **>N/3** nodes to travel from **B** to **D**.
- We need to visit **>N/2** nodes to travel from **C** to **D**.

It is clear that moving from one node to another can be up to **O( N )** complexity.

**This is important: **What if we broke the tree into 3 chains as shown in the image below? Then we could consider each chain as an independent problem. We would be dealing with chains, so we could use segment trees or other data structures that work well on linear lists of data.

Here are the details after the trick

- The tree still has **N** nodes, but it is **DECOMPOSED** into 3 chains, each of size **N/3**. See the 3 different colors; each one is a chain.
- **A** and **D** belong to the same chain. We can get the required sum of node values on the path from **A** to **D** in **O( log N )** time using the segment tree data structure.
- **B** belongs to the yellow chain, and **D** belongs to the blue chain. The path from **B** to **D** can be broken into **B** to **T3** and **T4** to **D**. Now we are dealing with 2 cases similar to the above case. We can calculate the required sum in **O( log N )** time for **B** to **T3** and **O( log N )** time for **T4** to **D**. Great, we reduced this to **O( log N )**.
- **C** belongs to the red chain, and **D** belongs to the blue chain. The path from **C** to **D** can be broken into **C** to **T1**, **T2** to **T3**, and **T4** to **D**. Again we are dealing with 3 cases similar to the 2nd case. So we can again do it in **O( log N )**.

Awesome!! We used the concepts of **decomposition** and **segment trees** and reduced the query complexity from **O( N )** to **O( log N )**. As I said before, competitive programmers always love the **log** factors 😀 😀

But wait, the tree in the example is special: only 2 nodes had a degree greater than 2. We did a simple decomposition and achieved better complexity, but in a general tree we need to do something a little more complex to get the same improvement. And that slightly more complex decomposition is called **Heavy Light Decomposition**.

We will divide the tree into **vertex-disjoint chains** (meaning no two chains have a node in common) in such a way that to move from **any node** in the tree to the **root** node, we have to **change at most log N** chains. To put it in other words: if the path from **any node** to the **root** is broken into pieces such that each piece belongs to only one chain, then there will be no more than **log N** pieces.

Let us assume that the above is done. So what? Now the path from any node **A** to any node **B** can be broken into two paths: **A** to **LCA( A, B )** and **B** to **LCA( A, B )**. Details about LCA – Click Here or Here. So at this point we only need to worry about paths of the following form: **start at some node and go up the tree**, because **A** to **LCA( A, B )** and **B** to **LCA( A, B )** are both such paths.

What are we up to till now?

- We assumed that we can break the tree into chains such that we have to **change at most log N** chains to move from any node up the tree.
- Any path can be broken into two paths such that both start at some node and move up the tree.
- We already know that queries in each chain can be answered with **O( log N )** complexity, and there are at most **log N** chains we need to consider per path. So on the whole we have an **O( log^2 N )** complexity solution. Great!!

Till now I have explained how **HLD **can be used to reduce complexity. Now we shall see details about how **HLD **actually decomposes the tree.

**Note : My version of HLD is little different from the standard one, but still everything said above holds.**

Common tree related terminology can be found here.

**Special Child : **Among all child nodes of a node, the one with the maximum sub-tree size is considered the **Special child**. Each non-leaf node has exactly one **Special child**.

**Special Edge : **For each non-leaf node, the edge connecting the node with its **Special child** is considered as **Special Edge**.

**Read the next 3 paragraphs until you clearly understand every line of it; hopefully every line makes sense. Read it 2 times, 3 times, 10 times, 2 power 10 times... until you understand!!**

What happens if you go to each node, find the special child and the special edge, and mark all special edges green while the other edges stay black? Well, what happens is **HLD**. What would the graph look like then? Colorful, yes. Not just colorful: the green edges actually form vertex-disjoint chains, and the black edges are the connectors between chains. Let us explore one chain: start at the root, move to the special child of the root (there is only one special child, so it is an easy pick), then to its special child, and so on until you reach a leaf node. What you just traveled is the chain which starts at the root node. Let us assume that the root node has **m** child nodes. Note that the **m-1** normal child nodes are the starting nodes of other chains.

What happens if you move from a node to a normal child of it? This is the **most important part**. When you move from a node to any of its normal children, the sub-tree size is at most half the sub-tree size of the current node. Consider a node **X** whose sub-tree size is **s** and which has **m** child nodes. If **m = 1**, the only child is the special child (so there is no case of moving to a normal child). For **m >= 2**, the sub-tree size of any normal child is **<= s/2**. To prove that, let us talk about the sub-tree size of the special child. What is the least sub-tree size possible for the special child? The answer is **ceil( s/m )**. To prove it, assume it is less than **ceil( s/m )**. As the special child is the one with maximum sub-tree size, all other normal children are at most as large as it, so **m** children each smaller than **ceil( s/m )** cannot sum up to **s** — a contradiction. So the minimum sub-tree size possible for the special child is **ceil( s/m )**. This being said, the maximum size for a normal child is **s/2**: if a normal child were larger than **s/2**, the special child would be at least as large, and the two together would exceed **s**. So whenever you move from a node to a normal child, the sub-tree size is at most half the sub-tree size of the parent node.

We stated earlier that to move from the **root** to any node (or vice versa) we need to change at most **log N** chains. Here is the proof: changing a chain means we are moving from a node to a normal child, so each time we change chains we at least halve the sub-tree size. For a tree of size **N**, we can halve it at most **log N** times (why? Take a number and keep halving; let me know if it takes more than **ceil( log N )** steps).

At this point, we know what **HLD **is, we know why one has to use **HLD**, basic idea of **HLD**, terminology and proof. We shall now see implementation details of **HLD **and few related problems.

**Algorithm**

HLD(curNode, Chain):
    Add curNode to curChain
    If curNode is LeafNode: return       // Nothing left to do
    sc := child node with maximum sub-tree size   // sc is the special child
    HLD(sc, Chain)                        // Extend current chain to special child
    for each child node cn of curNode:    // For normal children
        if cn != sc: HLD(cn, newChain)    // As told above, a new chain starts at each normal child

Above algorithm correctly does **HLD**. But we will need a bit more information when solving **HLD** related problems. We should be able to answer the following questions:

- Given a node, which chain does it belong to?
- Given a node, what is its position in its chain?
- Given a chain, what is the head of the chain?
- Given a chain, what is the length of the chain?

So let us see a **C++** implementation which covers all of the above

int chainNo = 0, chainHead[N], chainPos[N], chainInd[N], chainSize[N];

void hld(int cur) {
    if(chainHead[chainNo] == -1) chainHead[chainNo] = cur;
    chainInd[cur] = chainNo;
    chainPos[cur] = chainSize[chainNo];
    chainSize[chainNo]++;
    int ind = -1, mai = -1;
    for(int i = 0; i < adj[cur].sz; i++) {   // adj[cur].sz: size of the adjacency list
        if(subsize[ adj[cur][i] ] > mai) {
            mai = subsize[ adj[cur][i] ];
            ind = i;
        }
    }
    if(ind >= 0) hld( adj[cur][ind] );
    for(int i = 0; i < adj[cur].sz; i++) {
        if(i != ind) {
            chainNo++;
            hld( adj[cur][i] );
        }
    }
}

Initially all entries of chainHead[] are set to -1, so whenever a new chain is started the chain head is correctly assigned. As we add a new node to a chain, we note its position in the chain and increment the chain length. In the first for loop we find the child node with maximum sub-tree size. The following if condition fails for leaf nodes; when it passes, we extend the chain to the special child. In the second for loop we recursively call the function on all normal children. chainNo++ ensures that we create a new chain for each normal child.

Problem : SPOJ – QTREE

Solution : Each edge has a number associated with it. Given 2 nodes **A** and **B**, we need to find the edge on the path from **A** to **B** with the maximum value. Clearly we can break the path into **A** to **LCA( A, B )** and **B** to **LCA( A, B )**, calculate the answer for each of them, and take the maximum of both. As mentioned earlier, since the tree need not be balanced, it may take up to **O( N )** to travel from **A** to **LCA( A, B )** and find the maximum. Let us use **HLD** as detailed above to solve the problem.

Solution Link : Github – Qtree.cpp (well commented solution). I will not explain all the functions of the solution. I will explain how query works in detail

- **main()**: Scans the tree, calls all required functions in order.
- **dfs()**: Helper function. Sets up depth, subsize, and parent of each node.
- **LCA()**: Returns the Lowest Common Ancestor of two nodes.
- **make_tree()**: Segment tree construction.
- **update_tree()**: Segment tree update (point update).
- **query_tree()**: Segment tree query (range query).
- **HLD()**: Does the HL-Decomposition, similar to the one explained above.
- **change()**: Performs the change operation as given in the problem statement.

**query()** : We shall see in detail about the query function.

int query(int u, int v) {
    int lca = LCA(u, v);
    return max( query_up(u, lca), query_up(v, lca) );
}

We calculate LCA(u, v), then call the query_up function twice: once for the path u to lca and again for the path v to lca. We take the maximum of the two as the answer.

**query_up()** : This is **important.** This function takes 2 nodes u, v such that v is an ancestor of u. That means the path from u to v goes up the tree. We shall see how it works.

int query_up(int u, int v) {
    int uchain, vchain = chainInd[v], ans = -1;
    while(1) {
        uchain = chainInd[u];
        if(uchain == vchain) {
            int cur = query_tree(1, 0, ptr, posInBase[v]+1, posInBase[u]+1);
            if( cur > ans ) ans = cur;
            break;
        }
        int cur = query_tree(1, 0, ptr, posInBase[chainHead[uchain]], posInBase[u]+1);
        if( cur > ans ) ans = cur;
        u = chainHead[uchain];
        u = parent(u);
    }
    return ans;
}

**uchain** and **vchain** are the chain numbers in which **u** and **v** are present, respectively. We have a while loop which goes on until we move up from **u** to **v**. There are 2 cases: the first is when both **u** and **v** belong to the same chain; in this case we can query the range between **u** and **v**, and we can stop at this point because we have reached **v**.

The second case is when **u** and **v** are in different chains. Clearly **v** is higher in the tree than **u**. So we need to move completely up the chain of **u** and go to the next chain above it. We query from **chainHead[uchain]** to **u** and update the answer. Now we need to change chains: the next node after the current chain is the parent of **chainHead[uchain]**.

The following image is the same tree we used above; it illustrates how the query works on a path from **u** to **v**.

- SPOJ – QTREE – http://www.spoj.com/problems/QTREE/
- SPOJ – QTREE2 – http://www.spoj.com/problems/QTREE2/
- SPOJ – QTREE3 – http://www.spoj.com/problems/QTREE3/
- SPOJ – QTREE4 – http://www.spoj.com/problems/QTREE4/
- SPOJ – QTREE5 – http://www.spoj.com/problems/QTREE5/
- SPOJ – QTREE6 – http://www.spoj.com/problems/QTREE6/
- SPOJ – QTREE7 – http://www.spoj.com/problems/QTREE7/
- SPOJ – COT – http://www.spoj.com/problems/COT/
- SPOJ – COT2 – http://www.spoj.com/problems/COT2/
- SPOJ – COT3 – http://www.spoj.com/problems/COT3/
- SPOJ – GOT – http://www.spoj.com/problems/GOT/
- SPOJ – GRASSPLA – http://www.spoj.com/problems/GRASSPLA/
- SPOJ – GSS7 – http://www.spoj.com/problems/GSS7/
- CODECHEF – GERALD2 – http://www.codechef.com/problems/GERALD2
- CODECHEF – RRTREE – http://www.codechef.com/problems/RRTREE
- CODECHEF – QUERY – http://www.codechef.com/problems/QUERY
- CODECHEF – QTREE – http://www.codechef.com/problems/QTREE
- CODECHEF – DGCD – http://www.codechef.com/problems/DGCD
- CODECHEF – MONOPLOY – http://www.codechef.com/problems/MONOPLOY

**Editorials**:

- All codechef problems have Editorials.
**QTREE**: Explained above as example**QTREE2**: Segment trees this time should support sum operation over range. There is no update operation. Note that you can also use**BIT.**For finding the Kth element on a path, we can use**LCA**style jumping, it can be done in**O( log N )**.**QTREE3**: Simple. Segment tree should answer the following: index of left most black node in a range.**QTREE4**: You need the maximum distance. Clearly we can take the left most white node and right most white node. Distance between them will be the required answer. So our segment tree should be able to answer left most white node index as well as range sum.**QTREE5**: Bit hard. Read editorial of**QTREE6**to get idea. (Correct me if I am wrong, I did not solve it yet)**QTREE6**: Editorial – Click Here**QTREE7**: Hard. Read editorial of**MONOPOLY**and**GERALD2**.**MONOPOLY**: Editorial – Click Here**GERALD2**: Editorial – Click Here

Finallyyy!!! Long one!! When I was learning various algorithms and doing programming, the topic that bugged me the most was HLD. I did not understand it for a long time, and there was not any good tutorial. So I decided to start with HLD; I hope it will help a lot 🙂

This is my first attempt at writing a post/tutorial. Do share your views on it. Also, I took a lot of time setting up this blog on EC2 (3 days, well, I am new to this stuff). You can contact me about anything related to this blog: cautions, bugs, or will-do-good changes.

Share this post. Learn and let learn! 😀
