The function to be minimised
The lower bound of the interval to search in
The upper bound of the interval to search in
Control parameters for the optimisation
Returns information about the best point found so far.
Has the algorithm converged?
Any additional data returned by the target function, for this point
The number of times that target has been called so far
The number of times that step has been called so far
The best found location
The value of target(location)
Helper function to run the algorithm until it converges. This is very basic and not really intended for direct use - you should probably build logic around step directly, or use the fitBrent function if you want a simple interface.
The same object as result. Note that the algorithm may not have converged if maxIterations is not Infinity, so you should check the .converged field.
The maximum number of iterations of the algorithm (calls to step) to take. If we converge before hitting this number we will return early.
Has the algorithm converged?
Any additional data returned by the target function, for this point
The number of times that target has been called so far
The number of times that step has been called so far
The best found location
The value of target(location)
Advance the optimiser one "step" of the algorithm. This will evaluate target once.
true if the algorithm has converged, false otherwise. For details about the best point so far, see result.
Start, improve and interrogate an optimisation of a scalar-argument function (i.e., 1D optimisation). If you are doing multi-dimensional optimisation, you should use Simplex.
Like Simplex, creating an object does not perform the optimisation; instead it gives you an object that you can step through yourself. Use fitBrent for a one-shot version.
The approach here comes from Brent (1973) - Algorithms for Minimization Without Derivatives - in particular the Algol code on pp. 79-80 and the Fortran code from netlib.
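As a rough sketch of that approach (our own transcription for illustration, not the library's code, and with names of our choosing): each iteration tries a parabolic fit through the three best points found so far and takes the fitted minimum when that step is well-behaved, falling back to a golden-section step otherwise, so that target is evaluated once per iteration.

```typescript
// Sketch of Brent's 1D minimisation: minimise f on [a, b] to tolerance tol.
function localMin(f: (x: number) => number, a: number, b: number,
                  tol = 1e-8, maxIter = 200): { location: number; value: number } {
  const gold = 0.5 * (3 - Math.sqrt(5)); // golden-section fraction, ~0.382
  let x = a + gold * (b - a);            // best point so far
  let w = x, v = x;                      // second- and third-best points
  let fx = f(x), fw = fx, fv = fx;
  let d = 0, e = 0;                      // last and second-to-last step sizes

  for (let iter = 0; iter < maxIter; iter++) {
    const m = 0.5 * (a + b);
    const tol1 = tol * Math.abs(x) + 1e-12;
    const tol2 = 2 * tol1;
    if (Math.abs(x - m) <= tol2 - 0.5 * (b - a)) break; // converged

    let useGolden = true;
    if (Math.abs(e) > tol1) {
      // Fit a parabola through (v, fv), (w, fw), (x, fx).
      const r = (x - w) * (fx - fv);
      let q = (x - v) * (fx - fw);
      let p = (x - v) * q - (x - w) * r;
      q = 2 * (q - r);
      if (q > 0) p = -p;
      q = Math.abs(q);
      const eOld = e;
      e = d;
      // Accept the parabolic step only if it is small and stays in bounds.
      if (Math.abs(p) < Math.abs(0.5 * q * eOld) &&
          p > q * (a - x) && p < q * (b - x)) {
        d = p / q;
        const u = x + d;
        // Don't evaluate too close to the bracket ends.
        if (u - a < tol2 || b - u < tol2) d = x < m ? tol1 : -tol1;
        useGolden = false;
      }
    }
    if (useGolden) { // fall back to a golden-section step
      e = (x < m ? b : a) - x;
      d = gold * e;
    }
    // Take a step of at least tol1, evaluating target exactly once.
    const u = Math.abs(d) >= tol1 ? x + d : x + (d > 0 ? tol1 : -tol1);
    const fu = f(u);
    // Update the bracket and the three remembered points.
    if (fu <= fx) {
      if (u < x) b = x; else a = x;
      v = w; fv = fw; w = x; fw = fx; x = u; fx = fu;
    } else {
      if (u < x) a = u; else b = u;
      if (fu <= fw || w === x) { v = w; fv = fw; w = u; fw = fu; }
      else if (fu <= fv || v === x || v === w) { v = u; fv = fu; }
    }
  }
  return { location: x, value: fx };
}
```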
The description from the paper and code, updated with our names: