Dynamic programming is a fancy name for storing intermediate results and reusing the stored results instead of recomputing them each time.
A story will help. Suppose, one beautiful morning, I come to you with a complicated puzzle and challenge you to solve it. You work really hard the entire day and tell me the solution that evening. The next day, I come to you and ask you the exact same puzzle. Since the puzzle was so involved, you have forgotten most of the calculations you did, so you work them out again and tell me the solution. The third day, I come to you and ask you the same exact puzzle again. This time you're really annoyed, and you decide not to throw away your calculations but to keep them handy, so that if I bother you again, you can just show me your solution. The fourth day, when I come back to you, you go, "Ha! There's your solution. And I didn't have to think even for a second." And from that day on, I was never able to trouble you with that puzzle again.
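The "keep your calculations handy" idea is exactly memoization. As a minimal sketch (using Fibonacci as a hypothetical example, not any puzzle from the story), we store each answer the first time we compute it and simply look it up on every later request:

```cpp
#include <cstdint>
#include <unordered_map>

// Cache of previously computed answers: once a value is known,
// we never re-derive it.
std::unordered_map<int, std::uint64_t> memo;

std::uint64_t fib(int n) {
    if (n <= 1) return n;
    auto it = memo.find(n);
    if (it != memo.end()) return it->second;   // reuse the stored result
    return memo[n] = fib(n - 1) + fib(n - 2);  // compute once, then store
}
```

Without the cache, `fib(50)` would recompute the same subproblems exponentially many times; with it, each subproblem is solved exactly once.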
Yes. You can also use a set as a priority queue / heap, since a set supports strictly more functionality. I expect sets to be 2-5x slower than heaps / priority queues, because a set maintains a balanced binary search tree (BBST). 95% of the time, being 2-5x slower doesn't matter.
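As a sketch of the substitution in C++ (assuming `std::multiset`, so duplicates behave like a heap's): the smallest element is always at `begin()`, and erasing it is the equivalent of a heap pop. The extra functionality is that a set can also erase an arbitrary element, which a `std::priority_queue` cannot:

```cpp
#include <set>

// A multiset kept in sorted order standing in for a min-heap.
std::multiset<int> s;

void push(int x) { s.insert(x); }          // heap push
int  top()       { return *s.begin(); }    // heap top (minimum)
void pop()       { s.erase(s.begin()); }   // heap pop
```

All three operations are O(log n), the same asymptotic cost as a binary heap; the constant factor from tree rebalancing and pointer chasing is where the 2-5x slowdown comes from.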
A computer can only do a limited number of operations, such as additions, multiplications, and assignments, in a given time. Usually, this number is approximately a billion operations per second. Hence, if we want our algorithm to finish executing in a second or so, we need to make sure that the total amount of work we are asking it to do is less than about a billion operations.
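This rule of thumb turns into a quick feasibility check before you write any code. A minimal sketch, assuming the (hardware-dependent) budget of about 10^9 operations per second mentioned above:

```cpp
#include <cstdint>

// Rough feasibility check: will roughly `ops` elementary operations
// fit in about one second on a machine doing ~1e9 ops/sec?
bool fits_in_one_second(std::uint64_t ops) {
    const std::uint64_t BUDGET = 1'000'000'000ULL; // assumed, hardware-dependent
    return ops <= BUDGET;
}
```

For example, with n = 10^5, an O(n^2) algorithm does about 10^10 operations and blows the budget, while an O(n log n) one does under 2 * 10^6 and fits easily.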
Here is a tutorial on Computational Complexity by Michal Forišek, one of the organizers of IOI. I love the first couple of sections, titled "Why is it important?" and "What is efficiency?", before he gets into the formal definitions. As he says, the rationale behind the definitions is more important than the definitions themselves. You don't need to continue to section 2 of the tutorial; section 1 is sufficient.
Finally, once you understand the rationale behind asymptotic notation and computational complexity, here is a simple video explaining the concept in detail.
A sequence (array) is really just a function which associates integers (indices) with the corresponding values. However, there is no reason to restrict our usage of binary search to tangible sequences. In fact, we can use the same algorithm described above on any monotonic function f. ... The only difference is that we replace an array lookup with a function evaluation: we are now looking for some x such that f(x) is equal to the target value.
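As a sketch of that generalization (the function `f(x) = x * x` here is a hypothetical example, standing in for any monotonic function): we binary search for the smallest `x` whose `f(x)` reaches the target, replacing the array lookup with a call to `f`:

```cpp
// Binary search over a monotonic (non-decreasing) function instead of
// an array: find the smallest x in [lo, hi] with f(x) >= target.
long long f(long long x) { return x * x; }  // example monotonic function

long long search(long long lo, long long hi, long long target) {
    while (lo < hi) {
        long long mid = lo + (hi - lo) / 2;
        if (f(mid) >= target) hi = mid; // answer is mid or to its left
        else lo = mid + 1;              // answer is strictly to the right
    }
    return lo;
}
```

With `f(x) = x * x`, this computes the ceiling of the square root, but the loop itself never assumes anything about `f` beyond monotonicity, which is exactly why the array version and the function version are the same algorithm.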