Commit c5a467c2 authored by davidkep

improve documentation of MM optimizer

parent 6a5994f9
@@ -12,31 +12,49 @@ General Optimizer interface
``nsoptim::Optimizer``
======================
.. cpp:class:: template<typename _LossFunction, typename _PenaltyFunction, typename _Coefficients> class Optimizer

   Base class for all optimizers. The only purpose of this class is to ensure that ``_LossFunction`` is a valid loss function class, ``_PenaltyFunction`` is a valid penalty function class, and that both of them can handle coefficients of type ``_Coefficients``.
   The template parameters are made available as member types.

   .. rubric:: Public Types

   .. cpp:type:: LossFunction = _LossFunction

      Loss function type.

   .. cpp:type:: PenaltyFunction = _PenaltyFunction

      Penalty function type.

   .. cpp:type:: Coefficients = _Coefficients

      Coefficients type.

   .. cpp:type:: Optimum = nsoptim::Optimum<LossFunction, PenaltyFunction, Coefficients>

      Optimum type as returned by this optimizer.
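As a sketch of how this base class is typically used, a derived optimizer inherits the member types from the base class. The loss, penalty, coefficient, and optimizer classes below are illustrative stand-ins, not part of ``nsoptim``:

```cpp
#include <type_traits>

// Hypothetical placeholder types standing in for a real loss function,
// penalty function, and coefficients class.
struct MyLoss {};
struct MyPenalty {};
struct MyCoefficients {};

// Simplified re-implementation of the base-class pattern: the template
// parameters are only re-exported as member types so that derived
// optimizers and client code can refer to them uniformly.
template <typename _LossFunction, typename _PenaltyFunction, typename _Coefficients>
class Optimizer {
 public:
  using LossFunction = _LossFunction;
  using PenaltyFunction = _PenaltyFunction;
  using Coefficients = _Coefficients;
};

// A concrete optimizer derives from the base class and inherits the types.
class MyOptimizer : public Optimizer<MyLoss, MyPenalty, MyCoefficients> {};

static_assert(std::is_same<MyOptimizer::LossFunction, MyLoss>::value,
              "member types mirror the template parameters");
```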
************
MM Optimizer
************
The MM (minimization by majorization) algorithm is a "black-box" meta-algorithm that can optimize a very general class of objective functions without explicit knowledge of the intricacies of the loss and/or penalty function.
The MM optimizer only needs a loss and/or penalty function that provides a convex *surrogate* function.
The algorithm works by successively minimizing this convex surrogate function, which majorizes the true objective function at the current point :math:`x`.
A function :math:`h` majorizes function :math:`f` at :math:`x` if
* :math:`h` is greater than :math:`f` everywhere, i.e., :math:`h(x') \geq f(x')` for all :math:`x'` in the domain, and
* the functions coincide at :math:`x`, i.e., :math:`h(x) = f(x)`.
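For example (a standard majorizer, not specific to ``nsoptim``), the absolute value function :math:`f(x') = |x'|` is majorized at any :math:`x \neq 0` by the convex quadratic

.. math::

   h(x') = \frac{x'^2}{2 |x|} + \frac{|x|}{2},

since :math:`h(x) = |x|` and :math:`h(x') - |x'| = \left(|x'| - |x|\right)^2 / (2 |x|) \geq 0` everywhere.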
Either the loss function, the penalty function, or both must implement a ``ConvexSurrogate`` method according to the interface described in :ref:`ref-optim-mm-convex-surrogate`.
The MM optimizer requires an *inner optimizer*, i.e., an optimizer which can optimize the convex surrogate objective function for the given coefficients type.
The MM optimizer has several configuration parameters that are set on construction by supplying a :cpp:class:`nsoptim::MMConfiguration` object.
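To illustrate the principle, the following toy, self-contained MM loop (not the actual ``nsoptim`` API) minimizes :math:`f(x) = (x - 1)^2 + \lambda |x|` by repeatedly minimizing the quadratic surrogate that majorizes :math:`\lambda |x|` at the current iterate:

```cpp
#include <cmath>

// Toy MM iteration: minimize f(x) = (x - 1)^2 + lambda * |x|.
// At the iterate x_k (> 0), lambda * |x| is majorized by the convex
// quadratic lambda * (x^2 / (2 x_k) + x_k / 2), so the full surrogate
// (x - 1)^2 + lambda * x^2 / (2 x_k) + const has the closed-form
// minimizer x_{k+1} = 2 x_k / (2 x_k + lambda).
double MinimizeByMM(double lambda, double x0, int max_iter, double tol) {
  double x = x0;
  for (int it = 0; it < max_iter; ++it) {
    const double x_next = 2.0 * x / (2.0 * x + lambda);
    if (std::abs(x_next - x) < tol) {
      return x_next;
    }
    x = x_next;
  }
  return x;
}
```

For :math:`\lambda = 1` and :math:`x_0 = 1` the iterates decrease monotonically toward the minimizer :math:`x^* = 0.5`; because each surrogate touches :math:`f` at the current iterate and lies above it everywhere, no MM step can increase the true objective.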
.. _ref-optim-mm-convex-surrogate:
Convex Surrogate
================
@@ -46,9 +64,6 @@ Convex Surrogate
Penalty functions can provide a similar member to return a convex surrogate.
``nsoptim::MMConfiguration``
============================
.. doxygenstruct:: nsoptim::MMConfiguration
@@ -918,16 +918,16 @@ class AdmmVarStepOptimizer : public Optimizer<LossFunction, PenaltyFunction, Coe
penalty_.reset(new PenaltyFunction(penalty));
}
//! Get the convergence tolerance for the ADMM algorithm.
//!
//! @return convergence tolerance.
double convergence_tolerance() const noexcept {
return convergence_tolerance_;
}
//! Set the convergence tolerance for the ADMM algorithm.
//!
//! @param convergence_tolerance convergence tolerance for the ADMM algorithm.
void convergence_tolerance(double convergence_tolerance) noexcept {
convergence_tolerance_ = convergence_tolerance;
}
@@ -27,9 +27,11 @@ struct MMConfiguration {
//! Type of tightening for inner optimization.
enum class TighteningType {
//! No tightening, i.e., always use the configured numeric tolerance for the inner optimization.
//! This is automatically chosen if the inner optimizer does not support changing the numeric tolerance.
kNone = 0,
//! Start with a large inner tolerance and at each iteration make the inner optimization more precise by
//! reducing the tolerance level of the inner optimizer by a constant factor, up until the minimum inner
//! tolerance level is reached.
kExponential = 1,
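The exponential tightening strategy can be pictured as follows. This is an illustrative sketch with made-up names and values; the actual ``MMConfiguration`` fields may differ:

```cpp
#include <algorithm>
#include <vector>

// Sketch of an exponential tightening schedule: start with a loose inner
// tolerance and shrink it by a constant factor at every MM iteration,
// never going below the minimum inner tolerance.
std::vector<double> TighteningSchedule(double initial_tol, double min_tol,
                                       double factor, int iterations) {
  std::vector<double> schedule;
  double tol = initial_tol;
  for (int it = 0; it < iterations; ++it) {
    schedule.push_back(std::max(tol, min_tol));
    tol *= factor;  // tighten by a constant factor
  }
  return schedule;
}
```

The early MM iterations, whose solutions are discarded anyway, are then solved only coarsely and cheaply, while the final iterations are solved to full precision.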