If you're reading the Deep Learning Book, feel free to ask questions, discuss the material, or share resources.
Below is a video from a book club in San Francisco, USA, discussing this chapter, presented by Ian Goodfellow.
I was about to ask the same question. My colleague and I agree with you.
I don't think so - the indices taken alone can be arbitrary (i.e., he could have written a, b, c or x, y, z for the indices of V). However, when we take sums we need to ensure that the indices match.
In other words, I can say that $V_{x,y,z}$ represents the ...
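The point about index letters being arbitrary, as long as they match within a sum, can be illustrated with NumPy's `einsum` (a sketch; the arrays and letters here are my own, not from the book):

```python
import numpy as np

# Two small arrays to contract; the index letters below are arbitrary labels.
A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)

# Matrix product written in index notation: C_ik = sum_j A_ij * B_jk.
C1 = np.einsum("ij,jk->ik", A, B)

# The same contraction with different letters (a, b, c) gives the same result,
# because only the *pattern* of matching indices matters, not the names:
C2 = np.einsum("ab,bc->ac", A, B)

assert np.array_equal(C1, A @ B)
assert np.array_equal(C1, C2)
```

What would break the sum is mismatched letters, e.g. `"ij,kl->il"` no longer shares the summed index between the two arguments.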
Could someone explain what the last sentence on page 342 means:
"In linear algebra notation, we index into arrays using 1 for the first entry"
I'm rather confused because, say, if I wanted to find...
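If the confusion is about 1-based math notation versus 0-based programming, the mapping is easy to see in code (a sketch with a made-up array, assuming that is the issue):

```python
import numpy as np

x = np.array([10, 20, 30])

# In linear-algebra notation, x_1 denotes the first entry (here 10).
# In NumPy, indexing is 0-based, so that same entry is x[0]:
first = x[0]   # x_1 in math notation

# x[1] is therefore the *second* entry, x_2 in math notation:
second = x[1]
```

So a formula written with indices 1..n in the book corresponds to array positions 0..n-1 in most programming languages.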
In Chapter 7, Page 234, it says:
The new matrix is the same as the original one, but ...
Below is a video from a book club in San Francisco, USA, discussing this chapter, presented by Timothee Cour.
There is a very good explanation on StackExchange by Gabriel Romon, "Motivating sigmoid output units in neural networks starting with unnormalized log probabilities linear in $z = w^\top h + b$ and $\phi(z)$".
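The derivation that answer refers to (Section 6.2.2.2 of the book) starts from unnormalized log probabilities $\log \tilde{P}(y) = yz$ for $y \in \{0, 1\}$; exponentiating and normalizing yields $P(y{=}1 \mid x) = \sigma(z)$. A quick numerical check of that identity (a sketch, with an arbitrary value of $z$):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = 0.7  # arbitrary logit for illustration

# Unnormalized probabilities: P~(y) = exp(y * z) for y in {0, 1}.
unnorm = np.exp(np.array([0.0 * z, 1.0 * z]))

# Normalize so the two probabilities sum to 1.
p = unnorm / unnorm.sum()

# P(y=1) = exp(z) / (1 + exp(z)), which is exactly sigmoid(z).
assert np.isclose(p[1], sigmoid(z))
```

The same normalization argument over more than two classes is what motivates the softmax output unit later in the chapter.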
Equation 6.22 can be rewritten as:
How should one understand Equation 6.31, and how is it derived?