Part of Advances in Neural Information Processing Systems 7 (NIPS 1994)
Michael Lemmon, Peter Szymanski
This paper presents an alternating minimization (AM) algorithm used in the training of radial basis function and linear regressor networks. The algorithm is a modification of a small-step interior point method used in solving primal linear programs. The algorithm has a convergence rate of O(√n L) iterations, where n is a measure of the network size and L is a measure of the resulting solution's accuracy. Two results are presented that specify how aggressively the two steps of the AM algorithm may be pursued while ensuring the convergence of each step of the alternating minimization.
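The alternating-minimization scheme the abstract refers to can be illustrated with a minimal sketch: the objective is split into two blocks of variables, and each step exactly minimizes over one block while holding the other fixed. The quadratic objective below is a hypothetical stand-in for illustration, not the RBF-network training problem from the paper.

```python
def alternating_minimization(num_iters=100):
    """Minimize f(x, y) = (x-1)^2 + (y-2)^2 + (x-y)^2 by alternating
    exact minimization over x (with y fixed) and over y (with x fixed).

    This is a generic AM sketch on a toy quadratic, not the paper's
    interior-point-based network training algorithm."""
    x, y = 0.0, 0.0
    for _ in range(num_iters):
        # argmin over x: set df/dx = 2(x - 1) + 2(x - y) = 0
        x = (1.0 + y) / 2.0
        # argmin over y: set df/dy = 2(y - 2) + 2(y - x) = 0
        y = (2.0 + x) / 2.0
    return x, y

x, y = alternating_minimization()
print(x, y)  # converges to the joint minimizer (4/3, 5/3)
```

Because each block update is an exact minimization, the objective is non-increasing at every step; the paper's contribution concerns how aggressively such steps may be taken (here each step is solved exactly) while still guaranteeing convergence.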