Commit dbb03b0d, authored Oct 24, 2019 by davidkep: update README
Changes: .gitignore (+2 -0), README.md (+10 -105)
# nsoptim R Package
![Languages: R/C++](https://img.shields.io/badge/language-R%2FC%2B%2B11-blue)
![Lifecycle: Experimental](https://img.shields.io/badge/lifecycle-experimental-orange)
An **experimental** C++ template library for non-smooth optimization.
The library is wrapped in an R package to facilitate its use in other R packages.
The goal of this library is to streamline the use of modern, fast algorithms for the optimization of non-smooth functions.

The C++ header files are in [inst/include](inst/include) and can be used from within other R packages by adding `nsoptim` to the `LinkingTo` field in the package's DESCRIPTION file.

## Interfaces

### Loss Function
Loss functions which operate on the data type `Data` must derive from the base class `LossFunction<Data>`.
Loss functions must implement a copy constructor, which should be designed for fast, shallow copying of the loss.
Due to the early stage of the library, the interface might change considerably in the next few iterations.
Loss functions must support the following operations for one or more `Coefficients` types and the loss function's data type `Data`.
#### `LossFunction(const LossFunction& other)`
Copy constructor, potentially creating a shallow copy of the `other` loss function.
This is called often and should be designed with speed in mind.
#### `LossFunction Clone()`
Create a deep copy of the loss function, i.e., one that does not share any state with this loss function except for the constant, underlying data.
#### `Coefficients ZeroCoefficients() const`
Create an appropriate `Coefficients` object which can be evaluated by this loss function, with all values set to zero.
#### `const Data& data() const`
Return a constant reference to the data the loss object operates on.
#### `double operator()(const Coefficients& where) const`
Evaluate the loss function at the coefficient values `where`.
Additionally, loss functions can optionally implement the following method:
#### `double Difference(const Coefficients& x, const Coefficients& y) const`
Compute the difference between the two `Coefficients` objects `x` and `y`.
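As an illustration, the loss interface above can be sketched with a toy least-squares loss. All type names here (`XYData`, `SimpleCoefs`, `LeastSquaresLoss`) are hypothetical stand-ins, not part of nsoptim; a real loss would derive from `LossFunction<Data>` from [inst/include](inst/include).

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical data type: paired observations (x_i, y_i).
struct XYData {
  std::vector<double> x;
  std::vector<double> y;
};

// Hypothetical coefficient type: intercept and slope.
struct SimpleCoefs {
  double intercept = 0;
  double slope = 0;
};

// A toy least-squares loss following the documented interface.
// In nsoptim this would derive from `LossFunction<XYData>`.
class LeastSquaresLoss {
 public:
  explicit LeastSquaresLoss(const XYData* data) : data_(data) {}

  // Copy constructor: shallow and cheap, only copies the data pointer.
  LeastSquaresLoss(const LeastSquaresLoss& other) = default;

  // Deep copy; only the constant, underlying data is still shared.
  LeastSquaresLoss Clone() const { return LeastSquaresLoss(data_); }

  // Coefficients of the right shape, with all values set to zero.
  SimpleCoefs ZeroCoefficients() const { return SimpleCoefs{}; }

  // Constant reference to the data the loss operates on.
  const XYData& data() const { return *data_; }

  // Evaluate the loss at `where`: mean squared residual.
  double operator()(const SimpleCoefs& where) const {
    double sum = 0;
    for (std::size_t i = 0; i < data_->x.size(); ++i) {
      const double resid =
          data_->y[i] - where.intercept - where.slope * data_->x[i];
      sum += resid * resid;
    }
    return sum / data_->x.size();
  }

  // Optional: difference between two coefficient values as seen by the loss.
  double Difference(const SimpleCoefs& a, const SimpleCoefs& b) const {
    return std::abs((*this)(a) - (*this)(b));
  }

 private:
  const XYData* data_;  // not owned; shared between shallow copies
};
```

Note how the copy constructor only duplicates a pointer, which matches the requirement that copying be fast and shallow.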
### Penalty Function
Penalty functions must support evaluation of the penalty at one or more `Coefficients` types.
Penalty functions must also implement a copy constructor, which should be designed for fast, shallow copying of the penalty.
#### `PenaltyFunction(const PenaltyFunction& other)`
Copy constructor, potentially creating a shallow copy of the `other` penalty function.
This is called often and should be designed with speed in mind.
#### `double operator()(const Coefficients& where) const`
Evaluate the penalty at the coefficient values `where` and return the numeric value.
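The penalty interface can likewise be sketched with a toy LASSO-style penalty. The `L1Penalty` and `SimpleCoefs` names are hypothetical, not part of nsoptim:

```cpp
#include <cmath>

// Hypothetical coefficient type: intercept and slope.
struct SimpleCoefs {
  double intercept = 0;
  double slope = 0;
};

// Toy L1 penalty following the documented interface: evaluation via
// operator() and a cheap copy constructor.
class L1Penalty {
 public:
  explicit L1Penalty(double lambda) : lambda_(lambda) {}

  // Copy constructor: trivially cheap, only one double to copy.
  L1Penalty(const L1Penalty& other) = default;

  // Evaluate the penalty at `where`.
  double operator()(const SimpleCoefs& where) const {
    return lambda_ * (std::abs(where.intercept) + std::abs(where.slope));
  }

 private:
  double lambda_;  // penalization level (hyper-parameter)
};
```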
### Convex Functions or Convex Surrogates
Any loss or penalty function can define a method to create a convex surrogate at a particular value of the coefficients.
If a function `Function` derives from the type `ConvexFunction<Function>`, the convex surrogate is automatically determined to be the function itself.
A non-convex function which wants to provide a convex surrogate must implement the following method for all supported coefficient types `Coefficients`.

#### `const Function& ConvexSurrogate(const Coefficients& where) const`
Return a reference (or constant reference) to the convex surrogate of this function at the coefficient values `where`.
A convex surrogate must be a convex function which lies above this function everywhere and equals this function at `where`.
In other words, if this function is denoted by _f_ and the convex surrogate by _g_, _g_ must be a convex function such that _f(x) <= g(x)_ for all coefficient values `x` and _f(where) = g(where)_ at the given coefficient values `where`.
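The surrogate contract (_g_ convex, _f(x) <= g(x)_ everywhere, _f(where) = g(where)_) can be sketched with a one-dimensional concave penalty: for _p(x) = λ√|x|_, the tangent in _|x|_ at `where` is a weighted-L1 function, which is convex, majorizes _p_ by concavity, and touches it at `where`. All names are hypothetical, and the sketch assumes `where != 0`:

```cpp
#include <cmath>

// Hypothetical one-dimensional "coefficients".
using Coef = double;

// Convex majorizer: a tangent line in |x|, i.e., a weighted-L1 function.
class TangentSurrogate {
 public:
  TangentSurrogate(double slope, double offset)
      : slope_(slope), offset_(offset) {}
  double operator()(Coef x) const { return offset_ + slope_ * std::abs(x); }

 private:
  double slope_;
  double offset_;
};

// Non-convex penalty p(x) = lambda * sqrt(|x|). It is concave in |x|,
// so its tangent in |x| lies above it everywhere and touches it at the
// tangent point, satisfying the surrogate contract.
class SqrtPenalty {
 public:
  explicit SqrtPenalty(double lambda) : lambda_(lambda), surrogate_(0, 0) {}

  double operator()(Coef x) const { return lambda_ * std::sqrt(std::abs(x)); }

  // Per the interface: return a reference to the convex surrogate at `where`.
  // Assumes where != 0 (the derivative of sqrt blows up at zero).
  const TangentSurrogate& ConvexSurrogate(Coef where) const {
    const double aw = std::abs(where);
    const double slope = lambda_ / (2 * std::sqrt(aw));  // d/d|x| at `where`
    surrogate_ = TangentSurrogate(slope, lambda_ * std::sqrt(aw) - slope * aw);
    return surrogate_;
  }

 private:
  double lambda_;
  mutable TangentSurrogate surrogate_;  // storage backing the returned reference
};
```

The `mutable` member illustrates one way to return a reference from a `const` method, as the documented signature requires; the surrogate is recomputed on every call.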
Additionally, penalty functions can optionally implement the following method:

#### `double Difference(const Coefficients& x, const Coefficients& y) const`
Compute the difference between the two `Coefficients` objects `x` and `y`.
### Optimizer
Optimizers operate on the additive objective function _f(x; d) = l(x; d) + p(x)_, where _l_ is the loss function operating on data _d_ and _p_ is the penalty function. These have the following types:

* _l_ is of type `LossFunction`,
* _p_ is of type `PenaltyFunction`,
* and _x_ is of type `Coefficients`.

The type of _d_ is determined by the loss function _l_, namely `LossFunction::DataType`.
An optimizer must implement the following methods.
#### `Optimizer::Optimum Optimize()` and `Optimizer::Optimum Optimize(const Coefficients& start)`
Optimize the current objective function, optionally starting at the coefficient values `start`.
#### `LossFunction& loss()`
Give access to the loss function currently in use. This allows changing variable hyper-parameters which don't change the loss in a significant way.
#### `void loss(const LossFunction& loss)`
Replace the current loss function with a new one.
#### `PenaltyFunction& penalty()`
Give access to the penalty function currently in use. This allows changing variable hyper-parameters which don't change the penalty in a significant way.
#### `void penalty(const PenaltyFunction& penalty)`
Replace the current penalty function with a new one.
#### `Optimizer Clone()`
Clone the optimizer such that it operates on its own clones of the loss and penalty functions and does not share any state with this optimizer.
#### `Optimizer(const Optimizer& other)` and `Optimizer& operator=(const Optimizer& other)`
Copy constructor and assignment operator to create a new optimizer which inherits the state of the `other` optimizer and may share some internal data.
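A minimal sketch of the optimizer shape above, assuming a one-dimensional quadratic loss and L1 penalty so that `Optimize()` has the closed-form soft-thresholding solution. All names (`QuadLoss`, `L1Pen`, `ToyOptimizer`) are hypothetical and not nsoptim's actual optimizers, which iterate rather than solve in closed form:

```cpp
#include <algorithm>
#include <cmath>

// Toy quadratic loss l(x) = 0.5 * (x - center)^2.
struct QuadLoss {
  double center;
  double operator()(double x) const { return 0.5 * (x - center) * (x - center); }
};

// Toy L1 penalty p(x) = lambda * |x|.
struct L1Pen {
  double lambda;
  double operator()(double x) const { return lambda * std::abs(x); }
};

// Toy optimizer with the documented shape: Optimize() overloads,
// loss()/penalty() accessors and setters, and Clone().
class ToyOptimizer {
 public:
  // Result of an optimization: coefficients and objective function value.
  struct Optimum {
    double coefs;
    double objf_value;
  };

  ToyOptimizer(QuadLoss loss, L1Pen penalty) : loss_(loss), penalty_(penalty) {}

  // Minimize l + p; for this toy problem the soft-thresholding
  // operator gives the exact solution.
  Optimum Optimize() const {
    const double a = loss_.center;
    const double x =
        std::copysign(std::max(std::abs(a) - penalty_.lambda, 0.0), a);
    return {x, loss_(x) + penalty_(x)};
  }
  // The closed-form solution ignores the starting point.
  Optimum Optimize(double /*start*/) const { return Optimize(); }

  // Accessors and setters for the loss and penalty currently in use.
  QuadLoss& loss() { return loss_; }
  void loss(const QuadLoss& l) { loss_ = l; }
  L1Pen& penalty() { return penalty_; }
  void penalty(const L1Pen& p) { penalty_ = p; }

  // Deep copy sharing no state with this optimizer.
  ToyOptimizer Clone() const { return ToyOptimizer(loss_, penalty_); }

 private:
  QuadLoss loss_;
  L1Pen penalty_;
};
```

The penalty setter is useful, for example, when computing a whole regularization path: the same optimizer is reused with a sequence of penalty levels.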
### Iterative Optimizer
Iterative optimizers should additionally provide a method to set the convergence tolerance.
Convergence is determined by the optimizer in different ways, but the optimizer should ensure that the convergence criterion is comparable to the relative difference between two coefficient values as determined by the loss function in use.
#### `void convergence_tolerance(const double tolerance)`
Set the convergence tolerance to `tolerance`.
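The tolerance-based stopping described above can be sketched with plain gradient descent on a toy quadratic; `ToyIterativeOptimizer` and its members are hypothetical, not nsoptim types:

```cpp
#include <cmath>

// Toy iterative optimizer: gradient descent on l(x) = 0.5 * (x - center)^2,
// stopping when the relative change between iterates falls below the
// configurable convergence tolerance.
class ToyIterativeOptimizer {
 public:
  explicit ToyIterativeOptimizer(double center) : center_(center) {}

  // Per the documented interface: set the convergence tolerance.
  void convergence_tolerance(const double tolerance) { tolerance_ = tolerance; }

  double Optimize(double start) const {
    double x = start;
    for (int it = 0; it < 10000; ++it) {
      const double next = x - step_ * (x - center_);  // gradient step
      // Relative-difference convergence criterion.
      if (std::abs(next - x) <= tolerance_ * (1 + std::abs(x))) {
        return next;
      }
      x = next;
    }
    return x;  // iteration cap reached without convergence
  }

 private:
  double center_;
  double tolerance_ = 1e-8;
  double step_ = 0.5;  // step size; < 2 guarantees convergence here
};
```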
## Documentation
The documentation for the library is a work in progress. Currently, source code files include doxygen-style comments.