Commit ca9904ef authored by davidkep's avatar davidkep

Merge branch 'release/0.1'

parents 554d2e87 a927d604
......@@ -5,4 +5,6 @@
*.Rproj
.Rbuildignore
.vscode
*.dSYM
inst/doc/_doxy_xml
inst/doc/_html
......@@ -2,17 +2,16 @@ Package: nsoptim
Type: Package
Title: Utility Library to Implement Non-Smooth Optimization in C++
Version: 0.1.0
Date: 2019-01-30
Date: 2019-10-24
Authors@R: person("David", "Kepplinger", , "david.kepplinger@gmail.com",
role = c("aut", "cre"))
Copyright: See the file COPYRIGHTS for copyright details on some of the
functions and algorithms used.
Encoding: UTF-8
SystemRequirements: C++11
URL: https://github.com/dakep/nsoptim-rpkg
BugReports: https://github.com/dakep/nsoptim-rpkg/issues
Description: A C++ library to help solve non-smooth optimization problems
in R packages.
URL: https://gitlab.math.ubc.ca/dakep/nsoptim
BugReports: https://gitlab.math.ubc.ca/dakep/nsoptim/issues
Description: A C++ template library for non-smooth optimization.
Depends:
R (>= 3.4.0)
Imports:
......@@ -20,5 +19,7 @@ Imports:
LinkingTo:
Rcpp,
RcppArmadillo (>= 0.9.100)
License: GPL (>= 2)
Suggests:
testthat (>= 2.1.0)
License: MIT + file LICENSE
RoxygenNote: 6.0.1
MIT License
Copyright (c) 2019 David Kepplinger
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
YEAR: 2019
COPYRIGHT HOLDER: David Kepplinger
# nsoptim R Package
![Languages: R/C++](https://img.shields.io/badge/language-R%2FC%2B%2B11-blue)
![Lifecycle: Experimental](https://img.shields.io/badge/lifecycle-experimental-orange)
An **experimental** C++ template library for non-smooth optimization.
The library is wrapped in an R package to facilitate its use in other R packages.
The goal of this library is to streamline the use of modern, fast algorithms for the optimization of non-smooth functions.
The C++ header files are in [inst/include](inst/include) and can be used from within other R packages by adding `nsoptim`
to the `LinkingTo` field in the package's DESCRIPTION file.
## Interfaces
Due to the early stage of the library, the interface might change considerably in the next few iterations.
### Loss Function
Loss functions which operate on the data type `Data` must derive from the base class `LossFunction<Data>`.
Loss functions must implement a copy constructor which should be designed for fast and shallow copying of the loss.
Loss functions must support the following operations for one or more `Coefficients` types and the loss function's
data type `Data`.
#### `LossFunction(const LossFunction& other)` and `LossFunction& operator=(const LossFunction& other)`
Copy constructor/assignment operator, potentially creating a shallow copy of the `other` loss function.
These copy operations are called often and should be designed with speed in mind.
#### `LossFunction Clone()`
Create a deep copy of the loss function, i.e., one that does not share any data with this loss function, except for the
constant, underlying data.
#### `Coefficients ZeroCoefficients() const`
Create an appropriate `Coefficients` object which can be evaluated by this loss function, with all values set to zero.
#### `const Data& data() const`
Return a constant reference to the data the loss object operates on.
#### `double RelativeDifference(const Coefficients& x, const Coefficients& y) const`
Compute the relative difference between the two `Coefficients` objects `x` and `y`.
#### `double operator()(const Coefficients& where) const`
Evaluate the loss function at the coefficient values `where`.
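Below is a minimal sketch of a conforming loss function. The umbrella header `nsoptim.hpp`, the data type `ExampleData`, and the use of a plain `arma::vec` as the coefficient type are assumptions for illustration, not part of the documented API.

```cpp
#include <memory>
#include <armadillo>
#include <nsoptim.hpp>  // assumed umbrella header

// Illustrative data type: a constant predictor matrix and response vector.
struct ExampleData {
  arma::mat x;
  arma::vec y;
};

// A least-squares loss implementing the interface above. The data is held
// through a shared pointer so that copies are fast and shallow.
class ExampleLsLoss : public nsoptim::LossFunction<ExampleData> {
 public:
  explicit ExampleLsLoss(std::shared_ptr<const ExampleData> data) noexcept
      : data_(std::move(data)) {}

  // Fast, shallow copies: only the pointer to the constant data is copied.
  ExampleLsLoss(const ExampleLsLoss& other) = default;
  ExampleLsLoss& operator=(const ExampleLsLoss& other) = default;

  // Deep copy; this loss has no mutable state besides the constant data.
  ExampleLsLoss Clone() const { return ExampleLsLoss(*this); }

  const ExampleData& data() const { return *data_; }

  arma::vec ZeroCoefficients() const { return arma::zeros(data_->x.n_cols); }

  double RelativeDifference(const arma::vec& x, const arma::vec& y) const {
    const double denom = arma::norm(y);
    return (denom > 0) ? arma::norm(x - y) / denom : arma::norm(x - y);
  }

  // Half the average squared residual at `where`.
  double operator()(const arma::vec& where) const {
    const arma::vec residuals = data_->y - data_->x * where;
    return 0.5 * arma::dot(residuals, residuals) / data_->y.n_elem;
  }

 private:
  std::shared_ptr<const ExampleData> data_;
};
```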
### Penalty Function
Penalty functions must support evaluation of the penalty at one or more `Coefficients` types.
Penalty functions must implement a copy constructor which should be designed for fast and shallow copying of the penalty.
#### `double operator()(const Coefficients& where) const`
Evaluate the penalty at the coefficient values `where` and return the numeric value.
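A conforming penalty can be sketched the same way, under the same assumptions as the loss sketch above; the squared-L2 (ridge-type) penalty is purely illustrative:

```cpp
#include <armadillo>
#include <nsoptim.hpp>  // assumed umbrella header

// A simple squared-L2 penalty implementing the interface above.
class ExampleRidgePenalty : public nsoptim::PenaltyFunction {
 public:
  explicit ExampleRidgePenalty(const double lambda) noexcept : lambda_(lambda) {}

  // Penalties, like losses, should be fast and shallow to copy.
  ExampleRidgePenalty(const ExampleRidgePenalty& other) = default;
  ExampleRidgePenalty& operator=(const ExampleRidgePenalty& other) = default;

  // Evaluate the penalty at the coefficient values `where`.
  double operator()(const arma::vec& where) const {
    return 0.5 * lambda_ * arma::dot(where, where);
  }

 private:
  double lambda_;
};
```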
### Convex Functions or Convex Surrogates
Any loss or penalty function can define a method to create a convex surrogate at a particular value of the coefficients.
If a function `Function` derives from the type `ConvexFunction<Function>`, the convex surrogate is automatically
determined to be the function itself.
A non-convex function which wants to provide a convex surrogate must implement the following method for all supported
coefficient types `Coefficients`.
#### `const Function& ConvexSurrogate(const Coefficients& where) const`
Return a reference (or constant reference) to the convex surrogate of this function at the coefficient values `where`.
A convex surrogate function must be a convex function which is above this function everywhere and equals this
function at `where`.
In other words, if this function is denoted by _f_ and the convex surrogate is denoted by _g_, _g_ must be a convex
function such that _f(x) <= g(x)_ for all coefficient values `x` and _f(where) = g(where)_ at the given coefficient
values `where`.
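As a sketch, a (hypothetical) non-convex log-penalty could expose its local linear approximation as the convex surrogate, reusing the library's `AdaptiveLassoPenalty` as the surrogate type. The `ConvexSurrogateType` alias mirrors the trait used elsewhere in the library; treating the surrogate as a different type than the function itself, and dropping the constant offset, are assumptions of this sketch.

```cpp
#include <memory>
#include <armadillo>
#include <nsoptim.hpp>  // assumed umbrella header

// A non-convex penalty f(beta) = lambda * sum_j log(1 + |beta_j|) whose convex
// surrogate is the local linear approximation at `where`.
class ExampleLogPenalty : public nsoptim::PenaltyFunction {
 public:
  using Coefficients = nsoptim::RegressionCoefficients<arma::vec>;
  using ConvexSurrogateType = nsoptim::AdaptiveLassoPenalty;

  explicit ExampleLogPenalty(const double lambda)
      : lambda_(lambda), surrogate_(std::make_shared<const arma::vec>(), lambda) {}

  double operator()(const Coefficients& where) const {
    return lambda_ * arma::accu(arma::log(1 + arma::abs(where.beta)));
  }

  // log(1 + t) is concave in t >= 0, so its tangent at t = |where_j| lies above
  // it everywhere: the weighted L1 penalty below majorizes this penalty and
  // touches it at `where` up to an additive constant, which does not affect
  // the minimizer.
  const ConvexSurrogateType& ConvexSurrogate(const Coefficients& where) const {
    auto loadings = std::make_shared<const arma::vec>(1 / (1 + arma::abs(where.beta)));
    surrogate_ = ConvexSurrogateType(loadings, lambda_);
    return surrogate_;
  }

 private:
  double lambda_;
  mutable ConvexSurrogateType surrogate_;
};
```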
### Optimizer
Optimizers operate on the additive objective function _f(x; d) = l(x; d) + p(x)_, where _l_ is the loss function operating
on data _d_ and _p_ is the penalty function. These have the following types:
* _l_ is of type `LossFunction`,
* _p_ is of type `PenaltyFunction`,
* and _x_ is of type `Coefficients`.
The type of _d_ is determined by the loss function _l_, namely `LossFunction::DataType`.
An optimizer must implement the following methods.
#### `Optimizer::Optimum Optimize()` and `Optimizer::Optimum Optimize(const Coefficients& start)`
Optimize the current objective function, optionally starting at the coefficients value `start`.
#### `LossFunction& loss()`
Give access to the loss function currently in use. This allows changing hyper-parameters that
don't change the loss in a significant way.
#### `void loss(const LossFunction& loss)`
Replace the current loss function with a new one.
#### `PenaltyFunction& penalty()`
Give access to the penalty function currently in use. This allows changing hyper-parameters that
don't change the penalty in a significant way.
#### `void penalty(const PenaltyFunction& penalty)`
Replace the current penalty function with a new one.
#### `Optimizer Clone()`
Clone the optimizer such that it operates on its own clones of the loss and penalty functions and does not share
any state with this optimizer.
#### `Optimizer(const Optimizer& other)` and `Optimizer& operator=(const Optimizer& other)`
Copy constructor/assignment operator to create a new optimizer which inherits the state of the `other` optimizer and
may share some internal data.
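As a usage sketch, this interface is enough to drive any conforming optimizer along a grid of penalty levels with warm starts. The helper below is an assumption for illustration: it presumes the penalty exposes a `lambda()` setter (as the penalties in this library do) and that `Optimizer::Optimum` stores the coefficients in a member named `coefs`.

```cpp
#include <utility>
#include <vector>
#include <armadillo>

// Fit a conforming optimizer for each penalty level in `lambda_grid`,
// warm-starting every fit at the previous optimum.
template<typename Optimizer>
std::vector<typename Optimizer::Optimum> LambdaPath(Optimizer optimizer,
                                                    const arma::vec& lambda_grid) {
  std::vector<typename Optimizer::Optimum> path;
  auto start = optimizer.loss().ZeroCoefficients();
  for (const double lambda : lambda_grid) {
    optimizer.penalty().lambda(lambda);  // adjust only the hyper-parameter
    auto optimum = optimizer.Optimize(start);
    start = optimum.coefs;  // assumed member name; warm start for the next level
    path.push_back(std::move(optimum));
  }
  return path;
}
```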
### Iterative Optimizer
Iterative optimizers should additionally provide a method to set the convergence tolerance.
Convergence is determined by the optimizer in different ways, but the optimizer should ensure that the convergence
criterion is comparable to the relative difference between two coefficient values as determined by the
loss function in use.
#### `void convergence_tolerance(const double tolerance)`
Set the convergence tolerance to `tolerance`.
## Documentation
The documentation for the library is a work in progress. Currently, the source code files include Doxygen-style comments.
......@@ -9,10 +9,16 @@
#ifndef NSOPTIM_CONFIG_HPP_
#define NSOPTIM_CONFIG_HPP_
#ifndef NSOPTIM_METRICS_ENABLED
#define NSOPTIM_METRICS_ENABLED_BOOL false
#else
#define NSOPTIM_METRICS_ENABLED_BOOL true
#define NSOPTIM_METRICS_LEVEL 2
#ifndef NSOPTIM_DETAILED_METRICS
# undef NSOPTIM_METRICS_LEVEL
# define NSOPTIM_METRICS_LEVEL 1
#endif
#ifdef NSOPTIM_METRICS_DISABLED
# undef NSOPTIM_METRICS_LEVEL
# define NSOPTIM_METRICS_LEVEL 0
#endif
#endif // NSOPTIM_CONFIG_HPP_
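// Example (illustrative, not part of this header): client code can branch on
// the metrics level established above. A package would select the level at
// compile time, e.g. with `PKG_CPPFLAGS = -DNSOPTIM_METRICS_DISABLED` (or
// `-DNSOPTIM_DETAILED_METRICS`) in its src/Makevars.
#if NSOPTIM_METRICS_LEVEL >= 2
// ... collect detailed (and potentially costly) per-iteration metrics ...
#elif NSOPTIM_METRICS_LEVEL >= 1
// ... collect only coarse metrics ...
#endif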
......@@ -49,15 +49,24 @@ class PredictorResponseData {
//!
//! @param indices the indices of the observations to get.
//! @return the subset of the data with the requested observations.
inline PredictorResponseData observations(const arma::uvec& indices) const {
PredictorResponseData Observations(const arma::uvec& indices) const {
return PredictorResponseData(x_.rows(indices), y_.rows(indices));
}
//! Get a data set with the given observation removed.
//!
//! @param index the index of the observation to remove.
//! @return the subset of the data with the observation removed.
PredictorResponseData RemoveObservation(const arma::uword index) const {
return PredictorResponseData(arma::join_vert(x_.head_rows(index), x_.tail_rows(n_obs_ - index - 1)),
arma::join_vert(y_.head(index), y_.tail(n_obs_ - index - 1)));
}
//! Get a data set with the first `n_obs` observations of the data.
//!
//! @param n_obs number of observations to extract.
//! @return the subset of the data with the requested observations.
inline PredictorResponseData HeadRows(const arma::uword n_obs) const {
PredictorResponseData HeadRows(const arma::uword n_obs) const {
return PredictorResponseData(x_.head_rows(n_obs), y_.head_rows(n_obs));
}
......@@ -65,7 +74,7 @@ class PredictorResponseData {
//!
//! @param n_obs number of observations to extract.
//! @return the subset of the data with the requested rows.
inline PredictorResponseData TailRows(const arma::uword n_obs) const {
PredictorResponseData TailRows(const arma::uword n_obs) const {
return PredictorResponseData(x_.tail_rows(n_obs), y_.tail_rows(n_obs));
}
......@@ -73,7 +82,7 @@ class PredictorResponseData {
//! Only valid as long as the PredictorResponseData object is in scope.
//!
//! @return constant reference to the predictor matrix
inline const arma::mat& cx() const noexcept {
const arma::mat& cx() const noexcept {
return x_;
}
......@@ -81,7 +90,7 @@ class PredictorResponseData {
//! Only valid as long as the PredictorResponseData object is in scope.
//!
//! @return constant reference to the response vector
inline const arma::vec& cy() const noexcept {
const arma::vec& cy() const noexcept {
return y_;
}
......@@ -90,7 +99,7 @@ class PredictorResponseData {
//! Only valid as long as the PredictorResponseData object is in scope.
//!
//! @return reference to the predictor matrix
inline arma::mat& x() noexcept {
arma::mat& x() noexcept {
return x_;
}
......@@ -98,16 +107,16 @@ class PredictorResponseData {
//! Only valid as long as the PredictorResponseData object is in scope.
//!
//! @return reference to the response vector
inline arma::vec& y() noexcept {
arma::vec& y() noexcept {
return y_;
}
//! Get the number of observations in this data set.
inline arma::uword n_obs() const noexcept {
arma::uword n_obs() const noexcept {
return n_obs_;
}
//! Get the number of predictors in this data set.
inline arma::uword n_pred() const noexcept {
arma::uword n_pred() const noexcept {
return n_pred_;
}
......
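// Example (illustrative, not part of the diff): leave-one-out resampling built
// on the accessors above. `FitAndPredict` is a hypothetical helper.
double FitAndPredict(const nsoptim::PredictorResponseData& train, const arma::rowvec& x_new);

arma::vec LooPredictions(const nsoptim::PredictorResponseData& data) {
  arma::vec predictions(data.n_obs());
  for (arma::uword i = 0; i < data.n_obs(); ++i) {
    // Fit on all observations except observation `i`, then predict it.
    predictions[i] = FitAndPredict(data.RemoveObservation(i), data.cx().row(i));
  }
  return predictions;
}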
This diff is collapsed.
......@@ -20,7 +20,7 @@ namespace nsoptim {
//! The slope coefficients must be either of type `arma::vec` or `arma::sp_vec`.
template <class T>
class RegressionCoefficients {
static_assert(std::is_same<T, arma::vec>() || std::is_same<T, arma::sp_vec>(),
static_assert(std::is_same<T, arma::vec>::value || std::is_same<T, arma::sp_vec>::value,
"T must be a (sparse) vector.");
public:
......
......@@ -9,6 +9,8 @@
#ifndef NSOPTIM_OBJECTIVE_ADAPTIVE_EN_PENALTY_HPP_
#define NSOPTIM_OBJECTIVE_ADAPTIVE_EN_PENALTY_HPP_
#include <memory>
#include <nsoptim/armadillo.hpp>
#include <nsoptim/objective/convex.hpp>
#include <nsoptim/container/regression_coefficients.hpp>
......@@ -17,8 +19,8 @@
namespace nsoptim {
//! The adaptive elastic net penalty function defined as
//! $P(\beta; \lambda_1, \lambda_2, w) = \sum_{j=1}^p \lambda_1 w_j |\beta_j| + \lambda_2 \beta_j^2$
//! with a p-dimensional vector of penalty loadings, $w$, and hyper-parameters $\lambda_1$ and $\lambda_2$.
//! $P(\beta; \alpha, \lambda, w) = \lambda \sum_{j=1}^p \left( \alpha w_j |\beta_j| + \frac{1 - \alpha}{2} \beta_j^2 \right)$
//! with a p-dimensional vector of penalty loadings, $w$, and hyper-parameters $\alpha$ and $\lambda$.
class AdaptiveEnPenalty : public PenaltyFunction, public ConvexFunction<AdaptiveEnPenalty> {
public:
//! Declare this penalty function as an EN penalty.
......@@ -27,10 +29,8 @@ class AdaptiveEnPenalty : public PenaltyFunction, public ConvexFunction<Adaptive
template<typename T>
using GradientType = typename T::SlopeCoefficient;
explicit AdaptiveEnPenalty(const arma::vec& loadings) noexcept : loadings_(loadings), lambda_1_(0), lambda_2_(0) {}
AdaptiveEnPenalty(const arma::vec& loadings, const double lambda_1, const double lambda_2) noexcept
: loadings_(loadings), lambda_1_(lambda_1), lambda_2_(lambda_2) {}
AdaptiveEnPenalty(std::shared_ptr<const arma::vec> loadings, const double alpha, const double lambda = 0) noexcept
: loadings_(loadings), alpha_(alpha), lambda_(lambda) {}
AdaptiveEnPenalty(const AdaptiveEnPenalty& other) = default;
AdaptiveEnPenalty& operator=(const AdaptiveEnPenalty& other) = default;
......@@ -47,32 +47,24 @@ class AdaptiveEnPenalty : public PenaltyFunction, public ConvexFunction<Adaptive
return AdaptiveEnPenalty(*this);
}
void lambda_1(const double lambda_1) noexcept {
lambda_1_ = lambda_1;
}
double lambda_1() const noexcept {
return lambda_1_;
void lambda(const double lambda) noexcept {
lambda_ = lambda;
}
void lambda_2(const double lambda_2) noexcept {
lambda_2_ = lambda_2;
double lambda() const noexcept {
return lambda_;
}
double lambda_2() const noexcept {
return lambda_2_;
void alpha(const double alpha) noexcept {
alpha_ = alpha;
}
double alpha() const noexcept {
return lambda_1_ / (lambda_1_ + 2 * lambda_2_);
}
double lambda() const noexcept {
return lambda_1_ + 2 * lambda_2_;
return alpha_;
}
const arma::vec& loadings() const noexcept {
return loadings_;
return *loadings_;
}
//! Evaluate the adaptive EN penalty at `where`.
......@@ -90,10 +82,11 @@ class AdaptiveEnPenalty : public PenaltyFunction, public ConvexFunction<Adaptive
//! @return penalty evaluated at `where`.
template<typename VectorType>
double Evaluate(const RegressionCoefficients<VectorType>& where) const {
if (loadings_.n_elem > 0) {
return lambda_1_ * arma::accu(loadings_ % arma::abs(where.beta)) + lambda_2_ * arma::dot(where.beta, where.beta);
if (loadings_->n_elem > 0) {
return lambda_ * (alpha_ * arma::accu(*loadings_ % arma::abs(where.beta)) +
0.5 * (1 - alpha_) * arma::dot(where.beta, where.beta));
}
return lambda_1_ * arma::norm(where.beta, 1) + lambda_2_ * arma::dot(where.beta, where.beta);
return lambda_ * (alpha_ * arma::norm(where.beta, 1) + 0.5 * (1 - alpha_) * arma::dot(where.beta, where.beta));
}
//! Evaluate the subgradient of the adaptive EN penalty at the given coefficient value.
......@@ -105,21 +98,21 @@ class AdaptiveEnPenalty : public PenaltyFunction, public ConvexFunction<Adaptive
template<typename T>
T Gradient(const RegressionCoefficients<T>& where) const {
// The gradient is computed only for the non-zero coefficients. The other elements are set to 0.
if (loadings_.n_elem > 0) {
return lambda_1_ * (loadings_ % arma::sign(where.beta)) + 2 * lambda_2_ * where.beta;
if (loadings_->n_elem > 0) {
return lambda_ * (alpha_ * (*loadings_ % arma::sign(where.beta)) + (1 - alpha_) * where.beta);
}
return lambda_1_ * arma::sign(where.beta) + 2 * lambda_2_ * where.beta;
return lambda_ * (alpha_ * arma::sign(where.beta) + (1 - alpha_) * where.beta);
}
private:
const arma::vec& loadings_;
double lambda_1_;
double lambda_2_;
std::shared_ptr<const arma::vec> loadings_;
double alpha_;
double lambda_;
};
//! The adaptive lasso penalty function defined as
//! $P(\beta; \lambda_1, w) = \sum_{j=1}^p \lambda_1 w_j |\beta_j|$
//! with a p-dimensional vector of penalty loadings, $w$, and hyper-parameter $\lambda_1$.
//! $P(\beta; \lambda, w) = \lambda \sum_{j=1}^p w_j |\beta_j|$
//! with a p-dimensional vector of penalty loadings, $w$, and hyper-parameter $\lambda$.
class AdaptiveLassoPenalty : public PenaltyFunction, public ConvexFunction<AdaptiveLassoPenalty> {
public:
//! Declare this penalty function as an EN penalty.
......@@ -128,10 +121,8 @@ class AdaptiveLassoPenalty : public PenaltyFunction, public ConvexFunction<Adapt
template<typename T>
using GradientType = typename T::SlopeCoefficient;
explicit AdaptiveLassoPenalty(const arma::vec& loadings) noexcept : loadings_(loadings), lambda_1_(0) {}
AdaptiveLassoPenalty(const arma::vec& loadings, const double lambda_1) noexcept
: loadings_(loadings), lambda_1_(lambda_1) {}
explicit AdaptiveLassoPenalty(std::shared_ptr<const arma::vec> loadings, const double lambda = 0) noexcept
: loadings_(loadings), lambda_(lambda) {}
AdaptiveLassoPenalty(const AdaptiveLassoPenalty& other) = default;
AdaptiveLassoPenalty& operator=(const AdaptiveLassoPenalty& other) = default;
......@@ -148,32 +139,26 @@ class AdaptiveLassoPenalty : public PenaltyFunction, public ConvexFunction<Adapt
return AdaptiveLassoPenalty(*this);
}
void lambda_1(const double lambda_1) noexcept {
lambda_1_ = lambda_1;
}
double lambda_1() const noexcept {
return lambda_1_;
}
double lambda_2() const noexcept {
return 0.;
}
void alpha(const double) const noexcept {}
double alpha() const noexcept {
return 1.;
}
void lambda(const double lambda) noexcept {
lambda_ = lambda;
}
double lambda() const noexcept {
return lambda_1_;
return lambda_;
}
const arma::vec& loadings() const noexcept {
return loadings_;
return *loadings_;
}
operator AdaptiveEnPenalty() const {
return AdaptiveEnPenalty(loadings_, lambda_1_, 0.);
return AdaptiveEnPenalty(loadings_, 1., lambda_);  // alpha = 1 recovers the pure weighted L1 penalty
}
//! Evaluate the adaptive EN penalty at `where`.
......@@ -191,10 +176,10 @@ class AdaptiveLassoPenalty : public PenaltyFunction, public ConvexFunction<Adapt
//! @return penalty evaluated at `where`.
template<typename T>
double Evaluate(const RegressionCoefficients<T>& where) const {
if (loadings_.n_elem > 0) {
return lambda_1_ * arma::accu(loadings_ % arma::abs(where.beta));
if (loadings_->n_elem > 0) {
return lambda_ * arma::accu(*loadings_ % arma::abs(where.beta));
}
return lambda_1_ * arma::norm(where.beta, 1);
return lambda_ * arma::norm(where.beta, 1);
}
//! Evaluate the subgradient of the adaptive EN penalty at the given coefficient value.
......@@ -206,15 +191,15 @@ class AdaptiveLassoPenalty : public PenaltyFunction, public ConvexFunction<Adapt
template<typename T>
T Gradient(const RegressionCoefficients<T>& where) const {
// The gradient is computed only for the non-zero coefficients. The other elements are set to 0.
if (loadings_.n_elem > 0) {
return lambda_1_ * (loadings_ % arma::sign(where.beta));
if (loadings_->n_elem > 0) {
return lambda_ * (*loadings_ % arma::sign(where.beta));
}
return lambda_1_ * arma::sign(where.beta);
return lambda_ * arma::sign(where.beta);
}
private:
const arma::vec& loadings_;
double lambda_1_;
std::shared_ptr<const arma::vec> loadings_;
double lambda_;
};
} // namespace nsoptim
......
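// Example (illustrative): both adaptive penalties can share a single loadings
// vector through the shared_ptr-based constructors introduced above, keeping
// copies shallow.
void AdaptivePenaltyExample() {
  auto loadings = std::make_shared<const arma::vec>(arma::vec{0.5, 1.0, 2.0});
  nsoptim::AdaptiveLassoPenalty adaptive_lasso(loadings, 0.1);  // lambda = 0.1
  nsoptim::AdaptiveEnPenalty adaptive_en(loadings, 0.75, 0.1);  // alpha = 0.75, lambda = 0.1
}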
......@@ -9,6 +9,8 @@
#ifndef NSOPTIM_OBJECTIVE_EN_PENALTY_HPP_
#define NSOPTIM_OBJECTIVE_EN_PENALTY_HPP_
#include <memory>
#include <nsoptim/armadillo.hpp>
#include <nsoptim/container/regression_coefficients.hpp>
#include <nsoptim/objective/penalty.hpp>
......@@ -45,7 +47,7 @@ class EnPenalty : public PenaltyFunction, public ConvexFunction<EnPenalty> {
}
operator AdaptiveEnPenalty() const {
return AdaptiveEnPenalty(arma::vec(), lambda_1(), lambda_2());
return AdaptiveEnPenalty(std::make_shared<const arma::vec>(), alpha_, lambda_);
}
void alpha(const double alpha) noexcept {
......@@ -64,14 +66,6 @@ class EnPenalty : public PenaltyFunction, public ConvexFunction<EnPenalty> {
return lambda_;
}
double lambda_1() const noexcept {
return alpha_ * lambda_;
}
double lambda_2() const noexcept {
return 0.5 * (1 - alpha_) * lambda_;
}
//! Evaluate the elastic net penalty at `where`.
//!
//! @param where point where to evaluate the penalty function.
......@@ -148,20 +142,12 @@ class LassoPenalty : public PenaltyFunction, public ConvexFunction<LassoPenalty>
return lambda_;
}
double lambda_1() const noexcept {
return lambda_;
}
double lambda_2() const noexcept {
return 0.;
}
operator AdaptiveEnPenalty() const {
return AdaptiveEnPenalty(arma::vec(), 1, lambda_);
return AdaptiveEnPenalty(std::make_shared<const arma::vec>(), 1., lambda_);  // alpha = 1 recovers the pure L1 penalty
}
operator AdaptiveLassoPenalty() const {
return AdaptiveLassoPenalty(arma::vec(), lambda_);
return AdaptiveLassoPenalty(std::make_shared<const arma::vec>(), lambda_);
}
operator EnPenalty() const {
......@@ -243,16 +229,8 @@ class RidgePenalty : public PenaltyFunction, public ConvexFunction<RidgePenalty>
return lambda_;
}
double lambda_1() const noexcept {
return 0.;
}
double lambda_2() const noexcept {
return 0.5 * lambda_;
}
operator AdaptiveEnPenalty() const {
return AdaptiveEnPenalty(arma::vec(), 0, lambda_2());
return AdaptiveEnPenalty(std::make_shared<const arma::vec>(), 0, lambda_);
}
operator EnPenalty() const {
......@@ -294,4 +272,4 @@ class RidgePenalty : public PenaltyFunction, public ConvexFunction<RidgePenalty>
};
} // namespace nsoptim
#endif // NSOPTIM_PENALTY_EN_HPP_
#endif // NSOPTIM_OBJECTIVE_EN_PENALTY_HPP_
......@@ -13,15 +13,22 @@ namespace nsoptim {
//! Boilerplate base class for all loss functions.
//!
//! Loss functions must at least implement two methods:
//! `data` to give read access to the internal data, and
//! `operator()` to evaluate the loss at the given coefficients values.
//! Loss functions must at least implement the following methods:
//! `data()` to give read access to the internal data,
//! `operator()(where)` to evaluate the loss at the given coefficient values, and
//! `ZeroCoefficients()` to obtain the zero coefficient value.
//!
//! Loss functions can optionally also implement the following methods:
//! `Difference(a, b)` to evaluate the difference of two coefficient values.
//!
//! Loss functions should be easy and quick to copy and move. The main purpose is not to provide functionality but
//! context.
template<class Data>
class LossFunction {
public:
using DataType = Data;
//! Access the data the loss operates on.
//!
//! @return the data the loss operates on.
//! const Data& data() const;
......@@ -31,6 +38,17 @@ class LossFunction {
//! @param where where to evaluate the loss function.
//! @return the loss evaluated at the given coefficients.
//! double operator()(const Coefficients& where) const;
//! Get the zero coefficients for this loss type.
//!
//! @return zero coefficients.
// Coefficients ZeroCoefficients() const;
//! Get the difference between two sets of coefficients.
//!
//! @param x a set of regression coefficients.
//! @param y the other set of regression coefficients.
//! @return the relative difference between `x` and `y`.
// double Difference(const Coefficients& x, const Coefficients& y) const;
};
} // namespace nsoptim
......
This diff is collapsed.
......@@ -14,7 +14,10 @@ namespace nsoptim {
//! Boilerplate base class for all penalty functions.
//!
//! Penalty functions must at least implement the following method:
//! `operator()` to evaluate the penalty at the given coefficients values.
//! `operator()(where)` to evaluate the penalty at the given coefficient values.
//!
//! Penalty functions can optionally also implement the following methods:
//! `Difference(a, b)` to evaluate the difference of two coefficient values.
class PenaltyFunction {
public:
//! Evaluate the penalty function.
......@@ -22,6 +25,13 @@ class PenaltyFunction {
//! @param where where to evaluate the penalty function.
//! @return the penalty evaluated at the given coefficients.
//! double operator()(const Coefficients& where) const;
//! Get the difference between two sets of coefficients.
//!
//! @param x a set of regression coefficients.
//! @param y the other set of regression coefficients.
//! @return the relative difference between `x` and `y`.
// double Difference(const Coefficients& x, const Coefficients& y) const;
};
} // namespace nsoptim
......
......@@ -16,5 +16,6 @@
#include <nsoptim/optimizer/augmented_ridge.hpp>
#include <nsoptim/optimizer/mm.hpp>
#include <nsoptim/optimizer/dal.hpp>
#include <nsoptim/optimizer/admm.hpp>
#endif // NSOPTIM_OPTIMIZER_HPP_
This diff is collapsed.
This diff is collapsed.
This diff is collapsed.
//
// optimizer_base.hpp
// nsoptim
//
// Created by David Kepplinger on 2019-01-02.
// Copyright © 2019 David Kepplinger. All rights reserved.
//
#ifndef NSOPTIM_OPTIMIZER_OPTIMIZER_BASE_HPP_
#define NSOPTIM_OPTIMIZER_OPTIMIZER_BASE_HPP_
#include <nsoptim/optimizer/optimum.hpp>
#include <nsoptim/traits/traits.hpp>
namespace nsoptim {
//! Base class for all optimizers using loss function type `T`, penalty function type `U`, and coefficient type `V`.
//! This class checks whether `T` is a valid loss function for coefficient type `V` and whether `U` is a valid
//! penalty function for coefficient type `V`.
template<typename T, typename U, typename V>
class Optimizer {
public:
using LossFunction = T;
using PenaltyFunction = U;
using Coefficients = V;
using Optimum = nsoptim::Optimum<LossFunction, PenaltyFunction, Coefficients>;
static_assert(traits::is_loss_function<LossFunction>::value,
"LossFunction does not implement the loss function interface");
static_assert(traits::is_penalty_function<PenaltyFunction>::value,
"PenaltyFunction does not implement the penalty function interface");
static_assert(traits::loss_supports_evaluation<LossFunction, Coefficients>::value,
"LossFunction does not support evaluation of the coefficients.");
static_assert(traits::can_evaluate<PenaltyFunction, Coefficients>::value,
"PenaltyFunction does not support evaluation of the coefficients.");
};
} // nsoptim
#endif // NSOPTIM_OPTIMIZER_OPTIMIZER_BASE_HPP_
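// Example (illustrative): a concrete optimizer derives from the checked base
// class; instantiating the base triggers the static_asserts at compile time.
// `MyLsLoss` stands for any loss conforming to the loss interface above.
class MyLsOptimizer : public nsoptim::Optimizer<MyLsLoss, nsoptim::LassoPenalty,
                                                nsoptim::RegressionCoefficients<arma::vec>> {
  // ... implement Optimize(), loss(), penalty(), Clone(), ...
};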
This diff is collapsed.
This diff is collapsed.
......@@ -20,8 +20,8 @@ template<typename, typename>
static auto test_has_convex_surrogate(double) -> std::false_type;
template<typename T, typename U>
static auto test_has_convex_surrogate(int) -> sfinae_method_type<decltype(std::declval<T>().GetConvexSurrogate(std::declval<U>())),
typename T::ConvexSurrogateType>;
static auto test_has_convex_surrogate(int) -> sfinae_method_type<
decltype(std::declval<T>().GetConvexSurrogate(std::declval<U>())), typename T::ConvexSurrogateType>;
} // namespace internal
......
//
// has_difference_op.hpp
// nsoptim
//
// Created by David Kepplinger on 2018-01-26.