Commit cf5516a4 authored by davidkep

add regression to current loss names

parent aede43e4
......@@ -19,13 +19,13 @@ to the `LinkingTo` field in the package's DESCRIPTION file.
#include <nsoptim.hpp> // This also includes the armadillo library
// Alias for a linearized ADMM optimizer operating on the standard LS regression loss and an EN penalty using a dense coefficient vector.
using LinearizedAdmmOptimizer = nsoptim::LinearizedAdmmOptimizer<nsoptim::LsLoss, nsoptim::EnPenalty, nsoptim::RegressionCoefficients<arma::vec>>;
using LinearizedAdmmOptimizer = nsoptim::LinearizedAdmmOptimizer<nsoptim::LsRegressionLoss, nsoptim::EnPenalty, nsoptim::RegressionCoefficients<arma::vec>>;
typename LinearizedAdmmOptimizer::Coefficients Foo() {
// Generate dummy data with 100 observations and 10 predictors.
auto data = std::make_shared<nsoptim::PredictorResponseData>(arma::randn(100, 10), arma::randn(100));
nsoptim::LsLoss loss(data); // Create a LS loss function object with the generated data
nsoptim::LsRegressionLoss loss(data); // Create a LS loss function object with the generated data
nsoptim::EnPenalty penalty1(0.5, 2.4);  // Create an EN penalty function object with alpha=0.5 and lambda=2.4
nsoptim::EnPenalty penalty2(0.5, 1.5);  // Create an EN penalty function object with alpha=0.5 and lambda=1.5
......
......@@ -11,3 +11,5 @@ nsoptim is a C++ template library for non-smooth optimization, building upon the
overview
optimizer
penalty_functions
loss_functions
#################
Loss Functions
#################
*********************************************************
Loss Functions for Regression Problems
*********************************************************
Loss functions for regression problems are of the general form
.. math::
l(\beta_0, \beta) = \rho (y - X \beta - \beta_0)
where :math:`\beta_0` is a scalar intercept and :math:`\beta` are the regression parameters.
Here, :math:`y` is the *response vector* with *n* elements and :math:`X` is the *predictor matrix* with *n* rows and *p* columns.
``nsoptim::LsRegressionLoss``
==============================
The classical least-squares (LS) loss function for regression, as defined by
.. math::
l(\beta_0, \beta) = \frac{1}{2 n} \| y - X \beta - \beta_0 \|_2^2
.. doxygenclass:: nsoptim::LsRegressionLoss
:members:
``nsoptim::WeightedLsRegressionLoss``
======================================
The classical weighted least-squares (LS) loss function for regression, as defined by
.. math::
l(\beta_0, \beta; w) = \frac{1}{2 n} \sum_{i=1}^n w_i (y_i - x_i' \beta - \beta_0)^2
with :math:`w` being a vector of non-negative weights of length *n*.
.. doxygenclass:: nsoptim::WeightedLsRegressionLoss
:members:
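For illustration, here is a minimal sketch of constructing these loss objects from a shared data container. The ``nsoptim::PredictorResponseData`` and ``nsoptim::LsRegressionLoss`` constructors follow the README example in this commit; the two-argument ``WeightedLsRegressionLoss`` constructor taking a weight vector is an assumption and may differ from the actual header.
.. code-block:: cpp

   #include <memory>
   #include <nsoptim.hpp>

   // Wrap the predictor matrix (n = 100, p = 10) and the response vector (n = 100) in a shared container.
   auto data = std::make_shared<nsoptim::PredictorResponseData>(arma::randn(100, 10), arma::randn(100));

   // Unweighted LS regression loss.
   nsoptim::LsRegressionLoss ls_loss(data);

   // Weighted LS regression loss with unit weights (constructor signature assumed, not taken from the header).
   arma::vec weights = arma::ones(100);
   nsoptim::WeightedLsRegressionLoss wls_loss(data, weights);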
......@@ -54,10 +54,10 @@ MM Optimizer
The MM (minimization by majorization) algorithm is a "black-box" algorithm that can optimize a very general class of objective functions without explicit knowledge of the intricacies of the loss and/or penalty function.
The MM optimizer only needs a loss and/or penalty function that provides a convex *surrogate* function.
The algorithm works by successively minimizing a convex *surrogate* function that majorizes the true objective function at the current point :math:`b`.
A function :math:`h` majorizes function :math:`f` at :math:`x` if
A function :math:`h` majorizes function :math:`f` at :math:`b` if
* :math:`h` is greater than :math:`f` everywhere, i.e., :math:`h(x') \geq f(x')` for all :math:`x'` in the domain, and
* the functions coincide at :math:`x`, i.e., :math:`h(x) = f(x)`.
* :math:`h` is greater than :math:`f` everywhere, i.e., :math:`h(b') \geq f(b')` for all :math:`b'` in the domain, and
* the functions coincide at :math:`b`, i.e., :math:`h(b) = f(b)`.
Either the loss function, the penalty function, or both must implement a ``ConvexSurrogate`` method according to the interface described in :ref:`ref-optim-mm-convex-surrogate`.
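As a concrete illustration of the two requirements above (a textbook example, not part of the library), the absolute-value function is majorized at any point :math:`b \neq 0` by the quadratic surrogate
.. math::
   h(x) = \frac{x^2}{2 |b|} + \frac{|b|}{2},
   \quad\quad
   h(x) - |x| = \frac{\left( |x| - |b| \right)^2}{2 |b|} \geq 0,
   \quad\quad
   h(b) = |b|.
Minimizing such a surrogate and re-majorizing at the new point yields a monotonically non-increasing sequence of objective values.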
......@@ -92,15 +92,15 @@ Linearized Alternating Direction Method of Multipliers (ADMM) Optimizer
* Supported loss functions: :cpp:class:`LsRegressionLoss`, :cpp:class:`WeightedLsRegressionLoss`
* Supported penalty functions: :cpp:class:`EnPenalty`, :cpp:class:`AdaptiveEnPenalty`
Linearized ADMM works for objective functions that can be written as :math:`l(A x) + p(x)` and solves the problem
Linearized ADMM works for objective functions that can be written as :math:`l(X b) + p(b)` and solves the problem
.. math::
\operatorname*{arg\,min}_{z, x}\, l(z) + p(x)
\operatorname*{arg\,min}_{u, v}\, l(u) + p(v)
\quad\quad
\text{subject to }\; A x - z = 0
\text{subject to }\; X v - u = 0
Especially if :math:`A` is "wide" (i.e., has more columns than rows), the proximal operator for this problem is usually much quicker to compute than for :math:`\tilde l (x) = l(A x)`.
Especially if :math:`X` is "wide" (i.e., has more columns than rows), the proximal operator for this problem is usually much quicker to compute than for :math:`\tilde l (b) = l(X b)`.
More information on the properties of the algorithm can be found in :ref:`[1] <ref-attouch-2010>`.
The linearized ADMM algorithm requires a proper implementation of the proximal operator that can handle the given loss and penalty functions.
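For reference, the constrained problem above is typically handled through its augmented Lagrangian; this is a textbook sketch of the splitting, not a description of the library's internal implementation:
.. math::
   L_\rho(u, v, y) = l(u) + p(v) + y^\top (X v - u) + \frac{\rho}{2} \| X v - u \|_2^2.
The linearized variant replaces the quadratic coupling term in the :math:`v`-update by a linearization around the current iterate, so each iteration only requires the proximal operators of :math:`l` and :math:`p` instead of the solution of a coupled subproblem.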
......@@ -148,7 +148,7 @@ Alternating Direction Method of Multipliers (ADMM) Optimizer with Variable Step-Size
* Supported loss functions: :cpp:class:`LsRegressionLoss`, :cpp:class:`WeightedLsRegressionLoss`
* Supported penalty functions: :cpp:class:`EnPenalty`, :cpp:class:`AdaptiveEnPenalty`
This implementation operates directly on the objective function :math:`l(x) + p(x)`, but adjusts the step size
This implementation operates directly on the objective function :math:`l(b) + p(b)`, but adjusts the step size
according to :ref:`[2] <ref-bartels-2017>`.
``nsoptim::AdmmVarStepConfiguration``
......
......@@ -5,10 +5,13 @@ Overview
The library provides algorithms and utilities to optimize non-smooth functions, i.e., to solve
.. math::
\operatorname*{arg\,min}_{x} l(x) + \phi(x)
\operatorname*{arg\,min}_{\beta} l(\beta) + \phi(\beta)
where :math:`l(x)` is a smooth function (called *loss function*) and :math:`\phi(x)` is a non-smooth function (called *penalty function*).
The argument :math:`x` is the *coefficient*, and the result of any optimization is the *optimum*.
where :math:`l(\beta)` is a smooth function (called *loss function*) and :math:`\phi(\beta)` is a non-smooth function (called *penalty function*).
The argument :math:`\beta` is the *coefficient*, and the result of any optimization is the *optimum*.
All optimizers support one or more loss and penalty functions.
Static polymorphism through the extensive use of templating allows nsoptim to select the best version of an optimizer for the given combination of loss and penalty function.
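As a concrete instance of this template, penalized least-squares regression with an elastic net penalty combines the loss and penalty functions documented in the following sections:
.. math::
   \operatorname*{arg\,min}_{\beta_0, \beta}\,
   \frac{1}{2 n} \| y - X \beta - \beta_0 \|_2^2
   + \lambda \left( \alpha \| \beta \|_1 + \frac{1 - \alpha}{2} \| \beta \|_2^2 \right).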
************************************
Dependencies and System Requirements
......
#################
Penalty Functions
#################
*************************************
Non-Adaptive Elastic Net penalties
*************************************
Non-adaptive elastic net (EN) penalties are defined as
.. math::
\phi(\beta; \alpha, \lambda) = \lambda \left( \alpha \| \beta \|_1 + \frac{1 - \alpha}{2} \| \beta \|_2^2 \right)
where :math:`0 \leq \alpha \leq 1` and :math:`0 < \lambda` are two hyper-parameters.
Because the boundary values of :math:`\alpha` allow for specific simplifications, the library provides dedicated classes for these special cases to enable compile-time optimizations.
``nsoptim::EnPenalty``
======================
The most generic EN penalty.
.. doxygenclass:: nsoptim::EnPenalty
:members:
``nsoptim::LassoPenalty``
=========================
An EN penalty with :math:`\alpha` fixed at 1.
Can be cast to a generic EN penalty if desired.
.. doxygenclass:: nsoptim::LassoPenalty
:members:
``nsoptim::RidgePenalty``
=========================
An EN penalty with :math:`\alpha` fixed at 0.
Can be cast to a generic EN penalty if desired.
.. doxygenclass:: nsoptim::RidgePenalty
:members:
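A minimal sketch of constructing these penalty objects: the two-argument ``EnPenalty`` constructor follows the README example in this commit, while the single-argument (lambda-only) constructors of ``LassoPenalty`` and ``RidgePenalty`` are assumptions and may differ from the actual headers.
.. code-block:: cpp

   // Generic EN penalty with alpha = 0.5 and lambda = 2.4.
   nsoptim::EnPenalty en(0.5, 2.4);

   // Dedicated special cases (constructor signatures assumed).
   nsoptim::LassoPenalty lasso(2.4);  // alpha fixed at 1
   nsoptim::RidgePenalty ridge(2.4);  // alpha fixed at 0

   // Both special cases can be cast to the generic EN penalty if desired.
   nsoptim::EnPenalty en_from_lasso = lasso;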
*************************************
Adaptive Elastic Net penalties
*************************************
Adaptive EN penalties are of the form
.. math::
\phi(\beta; w, \alpha, \lambda) = \lambda \left( \alpha \sum_{j = 1}^p w_j |\beta_j| + \frac{1 - \alpha}{2} \| \beta \|_2^2 \right)
where :math:`w` is the vector of positive penalty loadings.
The other parameters are the same as for regular EN penalties.
**It is important to note that the penalty loadings are applied only to the L1 part of the penalty!**
Therefore, there is no "adaptive Ridge penalty".
``nsoptim::AdaptiveEnPenalty``
==============================
The most generic adaptive EN penalty.
.. doxygenclass:: nsoptim::AdaptiveEnPenalty
:members:
``nsoptim::AdaptiveLassoPenalty``
==================================
An adaptive EN penalty with :math:`\alpha` fixed at 1.
Can be cast to a generic adaptive EN penalty if desired.
.. doxygenclass:: nsoptim::AdaptiveLassoPenalty
:members:
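A minimal sketch of constructing the adaptive penalties with shared penalty loadings, following the constructors shown in ``en_penalty.hpp`` further down in this commit:
.. code-block:: cpp

   #include <memory>

   // Positive penalty loadings for p = 10 coefficients, shared between the penalty objects.
   auto loadings = std::make_shared<const arma::vec>(arma::ones(10));

   // Adaptive EN penalty with alpha = 0.5 and lambda = 2.4.
   nsoptim::AdaptiveEnPenalty adaptive_en(loadings, 0.5, 2.4);

   // Adaptive LASSO penalty with lambda = 2.4 (alpha is fixed at 1).
   nsoptim::AdaptiveLassoPenalty adaptive_lasso(loadings, 2.4);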
......@@ -10,7 +10,7 @@
#define NSOPTIM_OBJECTIVE_HPP_
// Loss functions
#include <nsoptim/objective/ls_loss.hpp>
#include <nsoptim/objective/ls_regression_loss.hpp>
// Penalty functions
#include <nsoptim/objective/en_penalty.hpp>
......
......@@ -18,51 +18,73 @@
namespace nsoptim {
//! The adaptive elastic net penalty function defined as
//! $P(\beta; \alpha, \lambda, w) = \lambda \sum_{j=1}^p \alpha w_j |\beta_j| + 0.5 * (1 - \alpha) \beta_j^2$
//! with a p-dimensional vector of penalty loadings, $w$, and hyper-parameters $\alpha$ and $\lambda$.
//! The adaptive elastic net penalty function with hyper-parameters *alpha*, *lambda*, and
//! non-negative penalty loadings.
class AdaptiveEnPenalty : public PenaltyFunction, public ConvexFunction<AdaptiveEnPenalty> {
public:
//! Declare this penalty function as an EN penalty.
struct is_en_penalty_tag {};
//! Type of the gradient if evaluated with coefficients of ``T``.
template<typename T>
using GradientType = typename T::SlopeCoefficient;
//! Initialize an adaptive EN penalty.
//!
//! @param loadings A shared pointer to a constant vector of penalty loadings.
//! @param alpha Value of the *alpha* hyper-parameter.
//! @param lambda Value of the *lambda* hyper-parameter. Default is 0.
AdaptiveEnPenalty(std::shared_ptr<const arma::vec> loadings, const double alpha, const double lambda = 0) noexcept
: loadings_(loadings), alpha_(alpha), lambda_(lambda) {}
//! Default copy constructor.
//! The copied penalty will **share the pointer to the penalty loadings**!
AdaptiveEnPenalty(const AdaptiveEnPenalty& other) = default;
//! Default copy assignment operator.
//! The copied penalty will **share the pointer to the penalty loadings**!
AdaptiveEnPenalty& operator=(const AdaptiveEnPenalty& other) = default;
//! Default move operator.
//! The copied penalty will **share the pointer to the penalty loadings**!
AdaptiveEnPenalty(AdaptiveEnPenalty&& other) = default;
//! Default move assignment.
//! The copied penalty will **share the pointer to the penalty loadings**!
AdaptiveEnPenalty& operator=(AdaptiveEnPenalty&& other) = default;
~AdaptiveEnPenalty() = default;
//! Set the *lambda* hyper-parameter.
//!
//! @param lambda New *lambda* value.
void lambda(const double lambda) noexcept {
lambda_ = lambda;
}
//! Get the value of the *lambda* hyper-parameter.
double lambda() const noexcept {
return lambda_;
}
//! Set the *alpha* hyper-parameter.
//!
//! @param alpha New *alpha* value.
void alpha(const double alpha) noexcept {
alpha_ = alpha;
}
//! Get the value of the *alpha* hyper-parameter.
double alpha() const noexcept {
return alpha_;
}
//! Get a constant reference to the vector of penalty loadings.
const arma::vec& loadings() const noexcept {
return *loadings_;
}
//! Evaluate the adaptive EN penalty at `where`.
//!
//! @param where point where to evaluate the penalty function.
//! @return penalty evaluated at `where`.
//! @param where Point where to evaluate the penalty function.
//! @return penalty evaluated at *where*.
template<typename T>
double operator()(const RegressionCoefficients<T>& where) const {
return Evaluate(where);
......@@ -70,7 +92,7 @@ class AdaptiveEnPenalty : public PenaltyFunction, public ConvexFunction<Adaptive
//! Evaluate the adaptive EN penalty at `where`.
//!
//! @param where point where to evaluate the penalty function.
//! @param where Point where to evaluate the penalty function.
//! @return penalty evaluated at `where`.
template<typename VectorType>
double Evaluate(const RegressionCoefficients<VectorType>& where) const {
......@@ -85,8 +107,7 @@ class AdaptiveEnPenalty : public PenaltyFunction, public ConvexFunction<Adaptive
//!
//! Elements of the slope that are 0 will be set to 0 in the gradient.
//!
//! @param coefs the coefficients where the subgradient should be evaluated.
//! @param gradient a pointer to where the subgradient should be stored at.
//! @param where Coefficients where the subgradient should be evaluated.
template<typename T>
T Gradient(const RegressionCoefficients<T>& where) const {
// The gradient is computed only for the non-zero coefficients. The other elements are set to 0.
......@@ -102,62 +123,81 @@ class AdaptiveEnPenalty : public PenaltyFunction, public ConvexFunction<Adaptive
double lambda_;
};
//! The adaptive lasso penalty function defined as
//! $P(\beta; \lambda, w) = \sum_{j=1}^p \lambda w_j |\beta_j|$
//! with a p-dimensional vector of penalty loadings, $w$, and hyper-parameter $\lambda$.
//! The adaptive lasso penalty with hyper-parameter *lambda* and non-negative penalty loadings.
class AdaptiveLassoPenalty : public PenaltyFunction, public ConvexFunction<AdaptiveLassoPenalty> {
public:
//! Declare this penalty function as an EN penalty.
struct is_en_penalty_tag {};
//! Type of the gradient if evaluated with coefficients of type ``T``.
template<typename T>
using GradientType = typename T::SlopeCoefficient;
//! Initialize an adaptive LASSO penalty.
//!
//! @param loadings A shared pointer to a constant vector of penalty loadings.
//! @param lambda Value of the *lambda* hyper-parameter. Default is 0.
explicit AdaptiveLassoPenalty(std::shared_ptr<const arma::vec> loadings, const double lambda = 0) noexcept
: loadings_(loadings), lambda_(lambda) {}
//! Default copy constructor.
//! The copied penalty will **share the pointer to the penalty loadings**!
AdaptiveLassoPenalty(const AdaptiveLassoPenalty& other) = default;
//! Default copy assignment operator.
//! The copied penalty will **share the pointer to the penalty loadings**!
AdaptiveLassoPenalty& operator=(const AdaptiveLassoPenalty& other) = default;
//! Default move operator.
//! The copied penalty will **share the pointer to the penalty loadings**!
AdaptiveLassoPenalty(AdaptiveLassoPenalty&& other) = default;
//! Default move assignment.
//! The copied penalty will **share the pointer to the penalty loadings**!
AdaptiveLassoPenalty& operator=(AdaptiveLassoPenalty&& other) = default;
~AdaptiveLassoPenalty() = default;
//! Set the *alpha* hyper-parameter. For adaptive LASSO penalties, this has no effect.
void alpha(const double) const noexcept {}
//! Get the value of the *alpha* hyper-parameter, which is always 1.
double alpha() const noexcept {
return 1.;
}
//! Set the *lambda* hyper-parameter.
//!
//! @param lambda New *lambda* value.
void lambda(const double lambda) noexcept {
lambda_ = lambda;
}
//! Get the value of the *lambda* hyper-parameter.
double lambda() const noexcept {
return lambda_;
}
//! Get a constant reference to the vector of penalty loadings.
const arma::vec& loadings() const noexcept {
return *loadings_;
}
//! Cast the adaptive LASSO penalty to an adaptive EN penalty, with *alpha* set to 1.
operator AdaptiveEnPenalty() const {
return AdaptiveEnPenalty(loadings_, 1., lambda_);
}
//! Evaluate the adaptive EN penalty at `where`.
//! Evaluate the adaptive LASSO penalty at `where`.
//!
//! @param where point where to evaluate the penalty function.
//! @return penalty evaluated at `where`.
//! @param where Point where to evaluate the penalty function.
//! @return penalty evaluated at *where*.
template<typename T>
double operator()(const RegressionCoefficients<T>& where) const {
return Evaluate(where);
}
//! Evaluate the adaptive EN penalty at `where`.
//! Evaluate the adaptive LASSO penalty at `where`.
//!
//! @param where point where to evaluate the penalty function.
//! @return penalty evaluated at `where`.
//! @param where Point where to evaluate the penalty function.
//! @return penalty evaluated at *where*.
template<typename T>
double Evaluate(const RegressionCoefficients<T>& where) const {
if (loadings_->n_elem > 0) {
......@@ -166,12 +206,11 @@ class AdaptiveLassoPenalty : public PenaltyFunction, public ConvexFunction<Adapt
return lambda_ * arma::norm(where.beta, 1);
}
//! Evaluate the subgradient of the adaptive EN penalty at the given coefficient value.
//! Evaluate the subgradient of the adaptive LASSO penalty at the given coefficient value.
//!
//! Elements of the slope that are 0 will be set to 0 in the gradient.
//!
//! @param coefs the coefficients where the subgradient should be evaluated.
//! @param gradient a pointer to where the subgradient should be stored at.
//! @param where Coefficients where the subgradient should be evaluated.
template<typename T>
T Gradient(const RegressionCoefficients<T>& where) const {
// The gradient is computed only for the non-zero coefficients. The other elements are set to 0.
......
......@@ -19,7 +19,7 @@
#include <nsoptim/container/data.hpp>
#include <nsoptim/optimizer/optimizer_base.hpp>
#include <nsoptim/optimizer/optimum.hpp>
#include <nsoptim/objective/ls_loss.hpp>
#include <nsoptim/objective/ls_regression_loss.hpp>
#include <nsoptim/objective/en_penalty.hpp>
#include <nsoptim/traits/traits.hpp>
#include <nsoptim/optimizer/soft_threshold.hpp>
......@@ -159,7 +159,7 @@ inline bool SolveChol(const arma::mat& chol, arma::vec * const b) {
//! Proximal operator for the unweighted LS loss.
class LsProximalOperator {
public:
using LossFunction = LsLoss;
using LossFunction = LsRegressionLoss;
//! Initialize the proximal operator with fixed step size `1 / tau`.
//!
......@@ -170,7 +170,7 @@ class LsProximalOperator {
//!
//! @param loss the LS loss to optimize. The operator retains only a reference to the loss, so it is
//! the user's responsibility not to use the operator after the loss has been destroyed!
inline void loss(LsLoss* loss) noexcept {
inline void loss(LsRegressionLoss* loss) noexcept {
loss_ = loss;
}
......@@ -277,13 +277,13 @@ class LsProximalOperator {
private:
double config_tau_;
LsLoss * loss_;
LsRegressionLoss * loss_;
};
//! Proximal operator for the weighted LS loss.
class WeightedLsProximalOperator {
public:
using LossFunction = WeightedLsLoss;
using LossFunction = WeightedLsRegressionLoss;
//! Initialize the proximal operator with fixed step size `1 / tau`.
//!
......@@ -294,7 +294,7 @@ class WeightedLsProximalOperator {
//!
//! @param loss the LS loss to optimize. The operator retains only a reference to the loss, so it is
//! the user's responsibility not to use the operator after the loss has been destroyed!
inline void loss(WeightedLsLoss* loss) noexcept {
inline void loss(WeightedLsRegressionLoss* loss) noexcept {
loss_ = loss;
weights_ = loss_->weights();
}
......@@ -400,17 +400,17 @@ class WeightedLsProximalOperator {
private:
double config_tau_;
WeightedLsLoss * loss_;
WeightedLsRegressionLoss * loss_;
arma::vec weights_;
};
namespace admm_optimizer {
//! Type trait mapping the LsLoss to the LsProximalOperator, the WeightedLsLoss to the WeightedLsProximalOperator,
//! Type trait mapping the LsRegressionLoss to the LsProximalOperator, the WeightedLsRegressionLoss to the WeightedLsProximalOperator,
//! and any other type to itself.
template <typename T>
using ProximalOperator = typename std::conditional<
std::is_same<T, LsLoss>::value, LsProximalOperator,
typename std::conditional<std::is_same<T, WeightedLsLoss>::value,
std::is_same<T, LsRegressionLoss>::value, LsProximalOperator,
typename std::conditional<std::is_same<T, WeightedLsRegressionLoss>::value,
WeightedLsProximalOperator, T>::type >::type;
} // namespace admm_optimizer
......@@ -820,7 +820,7 @@ class AdmmVarStepOptimizer : public Optimizer<LossFunction, PenaltyFunction, Coe
// using Weights = typename std::conditional<IsWeightedTag::value, arma::vec, double>::type;
static_assert(traits::is_en_penalty<PenaltyFunction>::value, "PenaltyFunction must be an EN-type penalty.");
static_assert(traits::is_ls_loss<LossFunction>::value, "LossFunction must be a least-squares-type loss.");
static_assert(traits::is_ls_regression_loss<LossFunction>::value, "LossFunction must be a least-squares-type loss.");
// ADMM state structure
struct State {
......
......@@ -17,7 +17,7 @@
#include <nsoptim/container/data.hpp>
#include <nsoptim/optimizer/optimizer_base.hpp>
#include <nsoptim/optimizer/optimum.hpp>
#include <nsoptim/objective/ls_loss.hpp>
#include <nsoptim/objective/ls_regression_loss.hpp>
#include <nsoptim/objective/en_penalty.hpp>
#include <nsoptim/traits/traits.hpp>
......@@ -25,12 +25,12 @@ namespace nsoptim {
//! Compute the Ridge regression estimate using standard linear algebra on the augmented response vector and predictor
//! matrix.
template<class LossFunction = WeightedLsLoss>
template<class LossFunction = WeightedLsRegressionLoss>
class AugmentedRidgeOptimizer : public Optimizer<LossFunction, RidgePenalty, RegressionCoefficients<arma::vec>> {
using Base = Optimizer<LossFunction, RidgePenalty, RegressionCoefficients<arma::vec>>;
using LossFunctionPtr = std::unique_ptr<LossFunction>;
using RidgePenaltyPtr = std::unique_ptr<RidgePenalty>;
static_assert(traits::is_ls_loss<LossFunction>::value, "LossFunction must be a LS-type loss.");
static_assert(traits::is_ls_regression_loss<LossFunction>::value, "LossFunction must be a LS-type loss.");
public:
using Coefficients = typename Base::Coefficients;
......
......@@ -18,7 +18,7 @@
#include <nsoptim/traits/traits.hpp>
#include <nsoptim/optimizer/optimizer_base.hpp>
#include <nsoptim/optimizer/optimum.hpp>
#include <nsoptim/objective/ls_loss.hpp>
#include <nsoptim/objective/ls_regression_loss.hpp>
#include <nsoptim/container/metrics.hpp>
#include <nsoptim/container/data.hpp>
#include <nsoptim/container/regression_coefficients.hpp>
......@@ -54,7 +54,7 @@ class DalEnOptimizer : public Optimizer<LossFunction, PenaltyFunction, Regressio
using PenaltyFunctionPtr = std::unique_ptr<PenaltyFunction>;
static_assert(traits::is_en_penalty<PenaltyFunction>::value, "PenaltyFunction must be an EN-type penalty.");
static_assert(traits::is_ls_loss<LossFunction>::value, "LossFunction must be a least-squares-type loss.");
static_assert(traits::is_ls_regression_loss<LossFunction>::value, "LossFunction must be a least-squares-type loss.");
public:
using Coefficients = typename Base::Coefficients;
......
......@@ -15,7 +15,7 @@
#include <nsoptim/armadillo.hpp>
#include <nsoptim/traits/traits.hpp>
#include <nsoptim/objective/ls_loss.hpp>
#include <nsoptim/objective/ls_regression_loss.hpp>
#include <nsoptim/container/metrics.hpp>
#include <nsoptim/container/data.hpp>
......
//
// is_ls_loss.hpp
// is_ls_regression_loss.hpp
// nsoptim
//
// Created by David Kepplinger on 2018-01-26.
// Copyright © 2019 David Kepplinger. All rights reserved.
//
#ifndef NSOPTIM_TRAITS_IS_LS_LOSS_HPP_
#define NSOPTIM_TRAITS_IS_LS_LOSS_HPP_
#ifndef NSOPTIM_TRAITS_IS_LS_REGRESSION_LOSS_HPP_
#define NSOPTIM_TRAITS_IS_LS_REGRESSION_LOSS_HPP_
#include <utility>
#include <nsoptim/traits/sfinae_types.hpp>
#include <nsoptim/traits/is_loss_function.hpp>
namespace nsoptim {
namespace traits {
namespace internal {
template<typename T>
static auto test_is_ls_loss(sfinae_type_wrapper<typename T::is_ls_loss_tag>*) -> std::true_type;
static auto test_is_ls_regression_loss(sfinae_type_wrapper<typename T::is_ls_regression_loss_tag>*) -> std::true_type;
template<typename>
static auto test_is_ls_loss(...) -> std::false_type;
static auto test_is_ls_regression_loss(...) -> std::false_type;
} // namespace internal
//! Type trait to identify a loss function as a least-squares regression loss.
// template<typename T>
// struct is_ls_regression_loss : decltype(internal::test_is_ls_regression_loss<T>(0)) {};
template<typename T>
struct is_ls_loss : decltype(internal::test_is_ls_loss<T>(0)) {};
struct is_ls_regression_loss : internal::tf_switch<decltype(internal::test_is_ls_regression_loss<T>(0))::value &&
is_loss_function<T>::value>::type {};
} // namespace traits
} // namespace nsoptim
#endif // NSOPTIM_TRAITS_IS_LS_LOSS_HPP_
#endif // NSOPTIM_TRAITS_IS_LS_REGRESSION_LOSS_HPP_
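Typical usage mirrors the static assertions in the optimizer headers above; a minimal sketch, assuming ``nsoptim::LsRegressionLoss`` also satisfies ``is_loss_function``:
// Compile-time check that a loss type is an LS regression loss.
static_assert(nsoptim::traits::is_ls_regression_loss<nsoptim::LsRegressionLoss>::value,
              "LsRegressionLoss should be identified as an LS regression loss");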
......@@ -17,7 +17,7 @@
#include <nsoptim/traits/is_en_penalty.hpp>
#include <nsoptim/traits/is_iterative_algorithm.hpp>
#include <nsoptim/traits/is_loss_function.hpp>
#include <nsoptim/traits/is_ls_loss.hpp>
#include <nsoptim/traits/is_ls_regression_loss.hpp>
#include <nsoptim/traits/is_penalty_function.hpp>
#include <nsoptim/traits/is_weighted.hpp>
#include <nsoptim/traits/has_difference_op.hpp>
......