-> Client holds private data X
-> Client encrypts X to E(X)
-> Client sends E(X) to the server
-> Server computes predictions F(E(X)) on the ciphertexts, without ever decrypting
-> Client decrypts F(E(X)) to recover the predictions F(X)
This is an interesting proposition for preserving privacy during data analysis, built on a form of encryption called homomorphic encryption, which allows computation to be performed directly on ciphertexts.
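The protocol above can be sketched with a toy example. The sketch below uses a textbook Paillier cryptosystem (additively homomorphic only, so it can evaluate a linear model F(x) = w·x + b, not a full network), with tiny illustrative primes; the key sizes, model, weights, and inputs are all made-up assumptions, not taken from the paper.

```python
import math
import random

# --- Toy Paillier keypair (real deployments use ~2048-bit primes) ---
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # decryption constant

def encrypt(m):
    # E(m) = g^m * r^n mod n^2, with random r coprime to n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Client: encrypt private features X
x = [3, 5]
enc_x = [encrypt(xi) for xi in x]

# Server: evaluate F(x) = w.x + b on ciphertexts only, using
# E(a) * E(b) = E(a + b) and E(a)^k = E(k * a)
w, b = [2, 4], 7
acc = pow(g, b, n2)                        # fold in the plaintext bias b
for ci, wi in zip(enc_x, w):
    acc = (acc * pow(ci, wi, n2)) % n2

# Client: decrypt the prediction F(X)
print(decrypt(acc))   # 2*3 + 4*5 + 7 = 33
```

The server never sees x or the result in the clear; it only manipulates ciphertexts. Supporting multiplications between encrypted values (needed for deeper models) requires a leveled or fully homomorphic scheme, which is where the cost explodes.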
But right now it is extremely limited. Only polynomial activation functions (up to a certain degree) may be used, the memory requirements are large, and the entire prediction process takes a very long time. It is much too computationally expensive to be applied at the scale the authors probably hoped. More collaboration between the machine learning and cryptography communities may be needed to close the gap.
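To illustrate the polynomial-activation restriction: a standard workaround (my example, not necessarily the authors' choice) is to replace a nonlinearity such as the sigmoid with a low-degree polynomial, e.g. its degree-3 Taylor expansion around 0, which a homomorphic scheme can evaluate with additions and multiplications alone.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def poly_sigmoid(x):
    # Degree-3 Taylor expansion of sigmoid around 0:
    # sigma(x) ~ 1/2 + x/4 - x^3/48
    return 0.5 + x / 4 - x ** 3 / 48

for x in [-1.0, 0.0, 0.5, 1.0]:
    print(f"x={x:+.1f}  sigmoid={sigmoid(x):.4f}  poly={poly_sigmoid(x):.4f}")
```

The approximation is close only near 0 and degrades quickly for larger |x|, which is one reason the polynomial degree (and thus the usable input range) is so constrained in practice.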