Class Brent

Start, improve and interrogate an optimisation of a scalar-argument function (i.e., 1D optimisation). If you are doing multidimensional optimisation, you should use Simplex instead.

As with Simplex, constructing the object does not perform the optimisation; it gives you an object whose steps you drive yourself. Use fitBrent for a one-shot version.

The approach here comes from Brent (1973) - Algorithms for Minimization Without Derivatives - in particular the Algol code on pp. 79-80 and the Fortran code from netlib.

The description from the paper and code, updated with our names:

The method used is a combination of golden section search and successive parabolic interpolation. Convergence is never much slower than that for a Fibonacci search. If target has a continuous second derivative which is positive at the minimum (which is not at lower or upper), then convergence is superlinear, and usually of the order of about 1.324.

The function target is never evaluated at two points closer together than eps * abs(fmin) + (tolerance / 3), where eps is approximately the square root of the relative machine precision. If target is a unimodal function and the computed values of target are always unimodal when separated by at least eps * abs(x) + (tolerance / 3), then fmin approximates the abscissa of the global minimum of target on the interval (lower, upper) with an error less than 3 * eps * abs(fmin) + tolerance. If target is not unimodal, then fmin may approximate a local, but perhaps non-global, minimum to the same accuracy.
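Because the optimiser is driven step by step, the control flow is easy to sketch. The following is a minimal, self-contained stand-in that mimics the step/result interface described below, using plain golden-section search only (not Brent's full method, which adds successive parabolic interpolation); the class and type names here are illustrative, not part of the library.

```typescript
// Stand-in for the library's TargetFn<number>.
type TargetFn = (x: number) => number;

// Illustrative stepper with the same step()/result() shape as Brent,
// but implementing only golden-section search.
class GoldenSectionSketch {
  private readonly phi = (Math.sqrt(5) - 1) / 2; // inverse golden ratio
  private a: number;
  private b: number;
  private c: number;
  private d: number;
  private fc: number;
  private fd: number;
  private evaluations = 0;
  private iterations = 0;

  constructor(private target: TargetFn, lower: number, upper: number,
              private tolerance = 1e-8) {
    this.a = lower;
    this.b = upper;
    this.c = this.b - this.phi * (this.b - this.a);
    this.d = this.a + this.phi * (this.b - this.a);
    this.fc = this.eval(this.c);
    this.fd = this.eval(this.d);
  }

  private eval(x: number): number {
    this.evaluations++;
    return this.target(x);
  }

  // Advance one step: shrink the bracket, evaluating target once.
  step(): boolean {
    this.iterations++;
    if (this.fc < this.fd) {
      // Minimum lies in [a, d]; reuse the point c as the new d.
      this.b = this.d;
      this.d = this.c;
      this.fd = this.fc;
      this.c = this.b - this.phi * (this.b - this.a);
      this.fc = this.eval(this.c);
    } else {
      // Minimum lies in [c, b]; reuse the point d as the new c.
      this.a = this.c;
      this.c = this.d;
      this.fc = this.fd;
      this.d = this.a + this.phi * (this.b - this.a);
      this.fd = this.eval(this.d);
    }
    return this.b - this.a < this.tolerance;
  }

  result() {
    const [location, value] =
      this.fc < this.fd ? [this.c, this.fc] : [this.d, this.fd];
    return {
      converged: this.b - this.a < this.tolerance,
      evaluations: this.evaluations,
      iterations: this.iterations,
      location,
      value,
    };
  }
}

// Drive the stepper yourself, exactly as you would with Brent:
const opt = new GoldenSectionSketch((x) => (x - 2) ** 2, 0, 5);
while (!opt.step()) {
  // inspect opt.result() here for per-step progress if you like
}
const res = opt.result();
// res.location is close to 2, res.value close to 0
```

The real class follows the same pattern: each call to step evaluates target once, and result can be interrogated at any point, not just after convergence.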

Hierarchy

  • Brent

Constructors

Methods

Constructors

  • new Brent(target: TargetFn<number>, lower: number, upper: number, control?: Partial<BrentControlParam>): Brent

  • Parameters

    • target: TargetFn<number>

      The function to be minimised

    • lower: number

      The lower bound of the interval to search in

    • upper: number

      The upper bound of the interval to search in

    • control: Partial<BrentControlParam> = {}

      Control parameters for the optimisation

    Returns Brent

Methods

  • result(): { converged: boolean; data: any; evaluations: number; iterations: number; location: number; value: number }
  • Return information about the best found point so far.

    Returns { converged: boolean; data: any; evaluations: number; iterations: number; location: number; value: number }

    • converged: boolean

      Has the algorithm converged?

    • data: any

      Any additional data returned by the target function, for this point

    • evaluations: number

      The number of times that target has been called so far

    • iterations: number

      The number of times that step has been called so far

    • location: number

      The best found location

    • value: number

      The value of target(location)
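For reference, the shape of the object returned by result (and by run) can be written out as a TypeScript interface; the name BrentResult and the report helper below are illustrative assumptions, not exports of the library.

```typescript
// Illustrative type matching the documented return shape of result();
// the name "BrentResult" is an assumption, not necessarily exported.
interface BrentResult {
  converged: boolean;  // has the algorithm converged?
  data: any;           // extra data returned by target at the best point
  evaluations: number; // calls to target so far
  iterations: number;  // calls to step so far
  location: number;    // best found location
  value: number;       // target(location)
}

// Typical inspection after an optimisation:
function report(res: BrentResult): string {
  return `f(${res.location}) = ${res.value} ` +
    `(${res.iterations} iterations, ${res.evaluations} evaluations)`;
}
```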

  • run(maxIterations?: number): { converged: boolean; data: any; evaluations: number; iterations: number; location: number; value: number }
  • Helper function to run the algorithm until it has converged. This is very basic and not really intended for serious use - you should probably build your own logic around step directly, or use the fitBrent function if you want a simple one-shot interface.

    Returns

    The same object as result. Note that the algorithm may not have converged if maxIterations is not Infinity, so you should check the .converged field.

    Parameters

    • maxIterations: number = Infinity

      The maximum number of iterations of the algorithm (calls to step) to take. If we converge before hitting this number we will return early.

    Returns { converged: boolean; data: any; evaluations: number; iterations: number; location: number; value: number }

    • converged: boolean

      Has the algorithm converged?

    • data: any

      Any additional data returned by the target function, for this point

    • evaluations: number

      The number of times that target has been called so far

    • iterations: number

      The number of times that step has been called so far

    • location: number

      The best found location

    • value: number

      The value of target(location)

  • step(): boolean
  • Advance the optimiser one "step" of the algorithm. This will evaluate target once.

    Returns

    true if the algorithm has converged, false otherwise. For details about the best point so far, see result.

    Returns boolean
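A common pattern is to wrap step in your own loop with an iteration budget, which is essentially what run does. The sketch below uses a stand-in object with the documented step/result shape, since constructing the real class is covered above; with the library you would write `new Brent(target, lower, upper)` instead of the stub.

```typescript
// Minimal view of the documented contract: step() returns true on
// convergence, result() reports the best point so far.
interface Stepper {
  step(): boolean;
  result(): { converged: boolean; location: number; value: number };
}

// Take at most maxSteps steps, stopping early on convergence -
// the same shape of loop that run(maxIterations) performs.
function runWithBudget(stepper: Stepper, maxSteps: number) {
  for (let i = 0; i < maxSteps; i++) {
    if (stepper.step()) {
      break;
    }
  }
  return stepper.result();
}

// A trivial stub that "converges" after five steps, for illustration only.
function makeStub(): Stepper {
  let n = 0;
  return {
    step: () => ++n >= 5,
    result: () => ({ converged: n >= 5, location: 0, value: 0 }),
  };
}
```

Because the budget is enforced outside the optimiser, remember to check the converged field of the returned object when the budget is finite.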

Generated using TypeDoc