Nonlinear Analysis and Optimization Seminar
Moderator: Simeon Reich
Abstract: Composite convex optimization problems that include a low-rank-promoting term have important applications in the data and imaging sciences. However, such problems are highly challenging to solve at large scale: the low-rank-promoting term prohibits efficient implementations of proximal-based methods, and even simple subgradient methods are of limited use. On the other hand, methods that are tailored to low-rank optimization, such as conditional-gradient-type methods, are usually slow. Motivated by these drawbacks, we present new algorithms and complexity results for some optimization problems in this class. At the heart of our results is the idea of using low-rank SVD computations in every iteration. This talk is based on joint work with Dan Garber and Shoham Sabach.
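To illustrate the low-rank SVD idea, here is a minimal sketch, not the speakers' algorithm: a proximal-gradient step for a nuclear-norm-regularized objective in which the usual full-SVD proximal computation is replaced by a rank-r truncated SVD. The function name and the toy matrix-completion objective are illustrative assumptions; the truncated step coincides with the exact prox only when the prox output has rank at most r.

```python
# A minimal sketch (assumptions: toy objective, illustrative names) of one
# proximal-gradient step for  min_X f(X) + lam * ||X||_*,  where the full-SVD
# prox of the nuclear norm is replaced by a rank-r truncated SVD.
import numpy as np
from scipy.sparse.linalg import svds

def low_rank_prox_grad_step(X, grad_f, step, lam, r):
    """One proximal-gradient step using a rank-r SVD instead of a full SVD."""
    Y = X - step * grad_f(X)                    # gradient step
    U, s, Vt = svds(Y, k=r)                     # top-r singular triplets only
    s_thr = np.maximum(s - step * lam, 0.0)     # soft-threshold singular values
    return (U * s_thr) @ Vt                     # rank-at-most-r iterate

# Example: one step on a synthetic least-squares matrix-completion objective
# f(X) = 0.5 * ||mask * (X - M)||_F^2 (data and mask are made up here).
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 40))
mask = rng.random((50, 40)) < 0.3
grad = lambda X: mask * (X - M)
X1 = low_rank_prox_grad_step(np.zeros((50, 40)), grad, step=1.0, lam=0.5, r=5)
print("iterate rank:", np.linalg.matrix_rank(X1))
```

The point of the design is cost: a rank-r SVD of an m-by-n matrix is far cheaper than the full SVD that the exact nuclear-norm prox requires, which is what makes per-iteration low-rank SVD computations attractive at large scale.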
Abstract: An important and challenging class of two-stage linear optimization problems is that of problems without relatively complete recourse, wherein there exist first-stage decisions and realizations of the uncertainty for which no feasible second-stage decision exists. Previous data-driven methods for these problems, such as the sample average approximation (SAA), are asymptotically optimal but perform prohibitively poorly with respect to out-of-sample feasibility. In this talk, we present a data-driven method for two-stage linear optimization problems without relatively complete recourse that achieves both (i) strong out-of-sample feasibility guarantees and (ii) general asymptotic optimality. Our method employs a simple robustification of the data combined with a scenario-wise approximation. A key contribution of this work is the development of novel geometric insights, which we use to show that the proposed approximation is asymptotically optimal. We demonstrate the practical benefit of the method through numerical experiments.
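To make the robustification-plus-scenario-wise idea concrete, the following is a minimal sketch on a hypothetical toy problem, not the talk's formulation: a one-dimensional newsvendor (which has complete recourse, so it only illustrates the mechanics, not the feasibility issues the talk addresses). Each demand sample is inflated to an interval of radius eps, and one recourse variable per sample guards against the worst case in its interval. All names, costs, and parameters are illustrative assumptions.

```python
# A minimal sketch (hypothetical toy problem) of robustified SAA with a
# scenario-wise approximation: sample d_i is replaced by the interval
# [d_i - eps, d_i + eps], and recourse y_i covers the worst case in it.
import numpy as np
from scipy.optimize import linprog

def robustified_saa_newsvendor(demand, c=1.0, q=4.0, eps=0.5):
    """Solve  min_x c*x + (q/N) * sum_i y_i  s.t.  y_i >= (d_i + eps) - x,
    x, y >= 0, i.e. the SAA on eps-inflated scenarios, as a single LP."""
    N = len(demand)
    obj = np.concatenate(([c], np.full(N, q / N)))   # variables: [x, y_1..y_N]
    A = np.hstack((-np.ones((N, 1)), -np.eye(N)))    # -x - y_i <= -(d_i + eps)
    b = -(np.asarray(demand) + eps)
    res = linprog(obj, A_ub=A, b_ub=b, bounds=(0, None))
    return res.x[0], res.fun                         # robust order level, cost

demands = np.array([2.0, 3.5, 5.0, 4.2])
x_star, cost = robustified_saa_newsvendor(demands)
print(f"order level: {x_star:.2f}, objective: {cost:.2f}")
```

Setting eps=0 recovers the plain SAA on this toy problem; a positive eps hedges each scenario against nearby unseen realizations, which is the intuition behind the out-of-sample feasibility guarantees described in the abstract.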