Event № 359
We consider the projected gradient (PG) method for nonnegative least squares (NNLS), that is, minimizing ||Ax - b||^2 subject to x >= 0. The NNLS problem plays a key role in statistical learning theory in general and in Support Vector Machines (SVM) in particular. In contrast to active set and interior point methods, which for a long time were the main tools for solving NNLS, the PG method does not require solving a linear system of equations at each step; its main operation per step is matrix-vector multiplication. The critical issue is therefore the convergence rate of PG methods. We establish convergence rates and complexity bounds for PG methods under various assumptions on the input data.
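For orientation, a minimal NumPy sketch of the basic PG iteration for NNLS: a gradient step followed by projection onto the nonnegative orthant. The constant step size 1/L with L = ||A||^2 and the stopping test are illustrative choices, not the specific variants analyzed here; note that each iteration uses only matrix-vector products.

```python
import numpy as np

def nnls_pg(A, b, max_iter=1000, tol=1e-8):
    """Projected gradient sketch for min_{x >= 0} ||A x - b||^2."""
    # Lipschitz constant of the gradient of the least-squares objective
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        grad = A.T @ (A @ x - b)                 # two matrix-vector products per step
        x_new = np.maximum(x - grad / L, 0.0)    # gradient step, then project onto x >= 0
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            break                                # stop when iterates stall
        x = x_new
    return x

# Example on random data
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)
x = nnls_pg(A, b)
```

The projection here is simple componentwise clipping, which is what makes PG cheap for NNLS compared with methods that solve a linear system at every step.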