Generalized penalty methods for convex optimization problems with pointwise inequality constraints

ao.Univ.-Prof. Dipl.-Ing. Dr. Helmut Gfrerer

May 12, 2009, 11:45 a.m., HS 14

Recently, a number of research efforts have focused on the development of numerical solution algorithms for PDE-constrained optimization problems subject to pointwise constraints on the optimization variables. Most of these approaches rely on some constraint qualification which guarantees the existence of Lagrange multipliers. Such constraint qualifications often complicate the problem structure and are in many cases of limited use for numerical purposes, since the multiplier is only a measure.

To avoid this disadvantage, we consider a class of general penalty methods which is applicable to a broad class of convex problems under very weak assumptions. The considered class of penalty methods covers both interior point methods and the classical case in which the penalty term is the squared norm of the residual. We show convergence of the primal variables and also give some error estimates. For the solution of the subproblems we use a Newton-type algorithm with line search which is globally and superlinearly convergent. However, to obtain superlinear convergence, some smoothing steps may be necessary.
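For illustration only (the notation below is not taken from the abstract), such a generalized penalty scheme can be sketched as follows: for a convex objective $f$, pointwise constraints $g(x)(\omega) \le 0$ on a domain $\Omega$, a penalty parameter $\gamma > 0$, and a convex penalty function $\varphi$, one solves a family of unconstrained subproblems

\[
  \min_{x \in X} \; f(x) + \frac{1}{\gamma}\int_{\Omega} \varphi\bigl(\gamma\, g(x)(\omega)\bigr)\, d\omega, \qquad \gamma \to \infty.
\]

Choosing $\varphi(t) = \tfrac{1}{2}\max(t,0)^2$ recovers the classical squared-residual penalty, while $\varphi(t) = -\log(-t)$ for $t < 0$ gives a logarithmic barrier and hence an interior point method.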

We conclude the talk with a report on numerical experience with the proposed algorithm.