Mathematical Optimization Group

Department of Mathematics, University of Tübingen, Germany

PAC-Bayesian Learning of Optimization Algorithms

M. Sucker and P. Ochs

Abstract:
We apply PAC-Bayes theory to the setting of learning-to-optimize. To the best of our knowledge, we present the first framework for learning optimization algorithms with provable generalization guarantees (PAC bounds) and an explicit trade-off between a high probability of convergence and a high convergence speed. Even in the limit case where convergence is guaranteed, our learned optimization algorithms provably outperform related algorithms based on a (deterministic) worst-case analysis. Our results rely on PAC-Bayes bounds for general, unbounded loss functions based on exponential families. By generalizing existing ideas, we reformulate the learning procedure as a one-dimensional minimization problem and study the possibility of finding a global minimum, which enables the algorithmic realization of the learning procedure. As a proof of concept, we learn hyperparameters of standard optimization algorithms to empirically underline our theory.
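For orientation, a classical PAC-Bayesian bound (a McAllester-type bound for loss functions bounded in [0,1]; the paper derives analogous bounds for unbounded losses via exponential families) states that, with probability at least 1 - \delta over the draw of n i.i.d. samples, every posterior \rho over hypotheses satisfies

\[
\mathbb{E}_{h \sim \rho}\big[L(h)\big] \;\le\; \mathbb{E}_{h \sim \rho}\big[\hat{L}_n(h)\big] + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}},
\]

where \pi is a data-independent prior, L the population risk, \hat{L}_n the empirical risk, and KL the Kullback-Leibler divergence.

To make the learning-to-optimize setting concrete, the following Python sketch "learns" a single hyperparameter (the step size of plain gradient descent) by minimizing the empirical risk over a distribution of random quadratic problems. It illustrates the setting only, not the PAC-Bayesian procedure of the paper; all names (sample_problem, evaluate_step_size, ...) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def sample_problem(dim=10):
    """Draw a random strongly convex quadratic f(x) = 0.5*x^T A x - b^T x."""
    M = rng.standard_normal((dim, dim))
    A = M @ M.T + np.eye(dim)          # symmetric positive definite Hessian
    b = rng.standard_normal(dim)
    return A, b

def gradient_descent(A, b, step_size, iterations=50):
    """Run plain gradient descent on f and return the final objective value."""
    x = np.zeros(A.shape[0])
    for _ in range(iterations):
        x = x - step_size * (A @ x - b)  # gradient of f is A x - b
    return 0.5 * x @ A @ x - b @ x

def evaluate_step_size(step_size, num_problems=100):
    """Empirical risk of a step size: mean final loss over sampled problems."""
    return np.mean([gradient_descent(*sample_problem(), step_size)
                    for _ in range(num_problems)])

# Grid search over the hyperparameter, as a crude stand-in for the
# learning procedure analyzed in the paper.
candidates = np.linspace(0.005, 0.1, 20)
risks = [evaluate_step_size(s) for s in candidates]
print("learned step size:", candidates[int(np.argmin(risks))])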
[pdf] [Bibtex] [arXiv]
Latest update: 20.10.2022
Citation:
M. Sucker, P. Ochs:
PAC-Bayesian Learning of Optimization Algorithms. [pdf]
Technical Report, ArXiv e-prints, arXiv:2210.11113, 2022.
Bibtex:
@techreport{SO22,
  title       = {PAC-Bayesian Learning of Optimization Algorithms},
  author      = {M. Sucker and P. Ochs},
  year        = {2022},
  institution = {ArXiv e-prints},
  number      = {arXiv:2210.11113},
}

