Traditional attention-based sequence-to-sequence models compute an attention distribution at each step of the output decoder and use it to blend the encoder's per-token context vectors into a single, consolidated context vector. This context vector is then used to compute a softmax over a fixed-size output dictionary.
In Pointer Nets, the attention scores (over all the tokens in the input sequence) are normalized and treated directly as the softmax output over the input tokens.
So a Pointer Net is a very simple modification of the standard attention model.
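A minimal PyTorch sketch of this modification, assuming additive (Bahdanau-style) attention scoring as in the paper (the names `W_enc`, `W_dec`, and `v` mirror the paper's W1, W2, and v; the hidden size and usage values are illustrative):

```python
import torch
import torch.nn as nn

class PointerAttention(nn.Module):
    """Additive attention where the normalized attention distribution
    itself is the output, i.e. a 'pointer' over input positions."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.W_enc = nn.Linear(hidden_dim, hidden_dim, bias=False)  # W1 in the paper
        self.W_dec = nn.Linear(hidden_dim, hidden_dim, bias=False)  # W2 in the paper
        self.v = nn.Linear(hidden_dim, 1, bias=False)               # v in the paper

    def forward(self, enc_states, dec_state):
        # enc_states: (batch, seq_len, hidden_dim) -- encoder states e_1..e_n
        # dec_state:  (batch, hidden_dim)          -- current decoder state d_i
        scores = self.v(torch.tanh(
            self.W_enc(enc_states) + self.W_dec(dec_state).unsqueeze(1)
        )).squeeze(-1)                               # (batch, seq_len)
        # Standard attention would now blend the encoder states:
        #   context = softmax(scores) @ enc_states
        # A Pointer Net instead returns the softmax itself as the
        # output distribution over the input tokens.
        return torch.softmax(scores, dim=-1)

# Usage: the output sums to 1 over the 7 input positions, and its
# argmax "points" at one of the inputs.
attn = PointerAttention(hidden_dim=16)
probs = attn(torch.randn(2, 7, 16), torch.randn(2, 16))
```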
Pointer Nets are useful for any problem where the size of the output dictionary depends on the size of the input, which rules out a fixed-length softmax.
E.g. combinatorial problems such as finding the planar convex hull, where the size of the output depends on the size of the input (see the check below).
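To make the variable-output-size point concrete, here is a small check (using SciPy for illustration; not part of the paper) showing that the convex hull of n planar points is a variable-length sequence of indices into the input, which is exactly the kind of output a Pointer Net emits:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
for n in (5, 50, 500):
    points = rng.random((n, 2))   # n random planar points
    hull = ConvexHull(points)
    # hull.vertices are indices into the input points, and how many
    # there are depends on the input -- no fixed-size softmax fits this.
    print(n, len(hull.vertices))
```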
The paper considers the following 3 problems:
Planar Convex Hull
Delaunay Triangulation
Travelling Salesman Problem (TSP)
Since some of these problems are NP-hard, the paper considers approximate solutions wherever exact solutions are not feasible to compute.
The authors used the exact same architecture and model hyperparameters for all instances of the 3 problems to demonstrate the generality of the model.
The proposed Pointer Nets outperform LSTMs and LSTMs with attention, and can generalise quite well to much longer sequences.