Artificial Neural Networks (Part 2) Gradient Descent Learning and Backpropagation
Christian Jacob
CPSC 533 — Winter 2001
Learning by Gradient Descent
Definition of the Learning Problem
Let us start with the simple case of linear cells, which we have introduced as perceptron units. The linear network should learn mappings (for m = 1, …, P) between
• an input pattern x^m = (x_1^m, …, x_N^m) and