SciPy, optimization

SciPy (pronounced "sigh pie") is a mathematical application package built on the NumPy extension of Python. With SciPy, an interactive Python session becomes a complete data-science and complex-system-prototyping environment comparable to MATLAB, IDL, Octave, R-Lab, and SciLab. Today I want to talk briefly about how to use some well-known optimization algorithms from the scipy.optimize package. Detailed and up-to-date help on using the functions can always be obtained with the help() command or with Shift+Tab.

Introduction

To save yourself and the readers from searching and reading primary sources, links to method descriptions will mostly point to Wikipedia. As a rule, this information is sufficient to understand the methods in general terms and the conditions for their use. To grasp the essence of the mathematical methods, follow the links to more authoritative publications, which can be found at the end of each article or in your favorite search engine.

The scipy.optimize module includes implementations of the following procedures:

  1. Constrained and unconstrained minimization of scalar functions of several variables (minimize) using various algorithms (Nelder-Mead simplex, BFGS, Newton conjugate gradients, COBYLA and SLSQP)
  2. Global optimization (for example: basinhopping, differential_evolution)
  3. Least-squares minimization of residuals (least_squares) and curve-fitting algorithms using nonlinear least squares (curve_fit)
  4. Minimization of scalar functions of one variable (minimize_scalar) and root finding (root_scalar); a short sketch of these two is given right after this list
  5. Multidimensional solvers for systems of equations (root) using various algorithms (hybrid Powell, Levenberg-Marquardt, or large-scale methods such as Newton-Krylov).
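
As a quick taste of item 4 before we narrow the focus, here is a minimal sketch; the objective, bracket, and expected values are made up purely for illustration:

import numpy as np
from scipy.optimize import minimize_scalar, root_scalar

# Scalar minimization: (t - 2)^2 has its minimum at t = 2
res = minimize_scalar(lambda t: (t - 2.0)**2)
print(res.x)       # ~2.0

# Scalar root finding: cos(t) = t has a root near 0.739 inside the bracket [0, 2]
sol = root_scalar(lambda t: np.cos(t) - t, bracket=[0.0, 2.0], method='brentq')
print(sol.root)    # ~0.739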

In this section we will consider only the first item from this list.

Unconstrained minimization of a scalar function of several variables

The minimize function from the scipy.optimize package provides a common interface for solving constrained and unconstrained minimization problems for scalar functions of several variables. To demonstrate how it works, we will need a suitable function of several variables, which we will minimize in different ways.
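
Whatever method is chosen, minimize returns an OptimizeResult object. Below is a minimal sketch of the general call pattern and of the result fields used throughout this article; the quadratic objective is just a placeholder:

import numpy as np
from scipy.optimize import minimize

# Placeholder objective: a simple quadratic with its minimum at (1, 2)
f = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2

res = minimize(f, x0=np.zeros(2), method='BFGS')
print(res.success)        # True if the solver reports convergence
print(res.x)              # approximate minimizer, here ~[1, 2]
print(res.fun)            # objective value at res.x
print(res.nit, res.nfev)  # iteration and function-evaluation counts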

For these purposes, the Rosenbrock function of N variables is perfect; it has the form:

f(x) = \sum_{i=0}^{N-2} \left[ 100\,(x_{i+1} - x_i^2)^2 + (1 - x_i)^2 \right]

Despite the fact that the Rosenbrock function and its Jacobian and Hessian matrices (the first and second derivatives, respectively) are already defined in the scipy.optimize package, we will define them ourselves.

import numpy as np

def rosen(x):
    """The Rosenbrock function"""
    return np.sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0, axis=0)
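
Since scipy.optimize already ships rosen (as well as rosen_der and rosen_hess), the hand-written version above can be sanity-checked against the library one; the test point below is arbitrary:

from scipy.optimize import rosen as rosen_ref

x_test = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
print(np.isclose(rosen(x_test), rosen_ref(x_test)))  # True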

For clarity, let us plot in 3D the values of the Rosenbrock function of two variables.

Plotting code

from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter

# Set up the 3D plot
fig = plt.figure(figsize=[15, 10])
ax = fig.add_subplot(projection='3d')

# Set the viewing angle
ax.view_init(45, 30)

# Create the data for the plot
X = np.arange(-2, 2, 0.1)
Y = np.arange(-1, 3, 0.1)
X, Y = np.meshgrid(X, Y)
Z = rosen(np.array([X, Y]))

# Draw the surface
surf = ax.plot_surface(X, Y, Z, cmap=cm.coolwarm)
plt.show()

[3D surface plot of the two-dimensional Rosenbrock function]

Knowing in advance that the minimum is 0 at x_i = 1, let us look at examples of how to find the minimum value of the Rosenbrock function using various scipy.optimize procedures.

The Nelder-Mead simplex method

Let x0 be an initial point in 5-dimensional space. Let us find the minimum point of the Rosenbrock function closest to it using the Nelder-Mead simplex algorithm (the algorithm is specified as the value of the method parameter):

from scipy.optimize import minimize
x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='nelder-mead',
    options={'xtol': 1e-8, 'disp': True})
print(res.x)

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 339
         Function evaluations: 571
[1. 1. 1. 1. 1.]

The simplex method is the simplest way to minimize an explicitly defined and fairly smooth function. It does not require computing the derivatives of the function; it is enough to specify only its values. The Nelder-Mead method is a good choice for simple minimization problems. However, since it does not use gradient estimates, it may take longer to find the minimum.

Powell's method

Another optimization algorithm in which only the function values are computed is Powell's method. To use it, set method='powell' in the minimize function.

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='powell',
    options={'xtol': 1e-8, 'disp': True})
print(res.x)

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 19
         Function evaluations: 1622
[1. 1. 1. 1. 1.]

Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm

To converge to a solution faster, the BFGS procedure uses the gradient of the objective function. The gradient can be specified as a function or computed using first-order finite differences. In any case, the BFGS method usually requires fewer function calls than the simplex method.
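
If jac is omitted, minimize approximates the gradient by finite differences, at the cost of extra function evaluations; a minimal sketch (the exact counts depend on the SciPy version):

x0 = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
res = minimize(rosen, x0, method='BFGS', options={'disp': True})
print(res.x)
print(res.nfev)  # noticeably larger than with an analytical gradient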

Let us find the gradient of the Rosenbrock function in analytical form:

\frac{\partial f}{\partial x_j} = \sum_{i=0}^{N-2} \left[ 200\,(x_{i+1} - x_i^2)\,(\delta_{i+1,j} - 2 x_i \delta_{i,j}) - 2\,(1 - x_i)\,\delta_{i,j} \right]

= 200\,(x_j - x_{j-1}^2) - 400\,x_j\,(x_{j+1} - x_j^2) - 2\,(1 - x_j)

This expression is valid for the derivatives with respect to all variables except the first and the last, which are defined as:

\frac{\partial f}{\partial x_0} = -400\,x_0\,(x_1 - x_0^2) - 2\,(1 - x_0)

\frac{\partial f}{\partial x_{N-1}} = 200\,(x_{N-1} - x_{N-2}^2)

Let us look at the Python function that computes this gradient:

def rosen_der(x):
    xm = x[1:-1]
    xm_m1 = x[:-2]
    xm_p1 = x[2:]
    der = np.zeros_like(x)
    der[1:-1] = 200*(xm - xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1 - xm)
    der[0] = -400*x[0]*(x[1] - x[0]**2) - 2*(1 - x[0])
    der[-1] = 200*(x[-1] - x[-2]**2)
    return der
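
A convenient way to verify an analytical derivative such as rosen_der is scipy.optimize.check_grad, which compares it with a finite-difference approximation at a given point; the test point below is arbitrary:

from scipy.optimize import check_grad

x_test = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
# Norm of the difference between the analytical and numerical gradients;
# a value close to zero indicates the derivative is implemented correctly
print(check_grad(rosen, rosen_der, x_test))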

The gradient-computing function is specified as the value of the jac parameter of the minimize function, as shown below.

res = minimize(rosen, x0, method='BFGS', jac=rosen_der, options={'disp': True})
print(res.x)

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 25
         Function evaluations: 30
         Gradient evaluations: 30
[1.00000004 1.0000001  1.00000021 1.00000044 1.00000092]

Conjugate gradient algorithm (Newton)

The Newton conjugate gradient algorithm is a modified Newton's method.
Newton's method is based on approximating the function in a local neighborhood by a second-degree polynomial:

f(x) \approx f(x_0) + \nabla f(x_0) \cdot (x - x_0) + \frac{1}{2} (x - x_0)^T H(x_0) (x - x_0)

where H(x_0) is the matrix of second derivatives (the Hessian matrix).
If the Hessian is positive definite, then a local minimum of this function can be found by setting the gradient of the quadratic form to zero. The result is the expression:

x_{\min} = x_0 - H^{-1}(x_0)\,\nabla f(x_0)
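
As a sanity check of this expression: for a purely quadratic objective a single step x_0 - H^{-1} \nabla f(x_0) lands exactly at the stationary point. A minimal sketch with made-up data (A and b are arbitrary):

# f(x) = 0.5*x^T A x - b^T x has gradient A x - b and constant Hessian A
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])    # positive definite Hessian
b = np.array([1.0, 1.0])
grad = lambda x: A @ x - b

x0_q = np.array([5.0, -7.0])  # arbitrary starting point
x_min = x0_q - np.linalg.solve(A, grad(x0_q))  # one Newton step

print(np.allclose(A @ x_min, b))  # True: the gradient vanishes at x_min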

The inverse Hessian is computed using the conjugate gradient method. An example of using this method to minimize the Rosenbrock function is given below. To use the Newton-CG method, you must specify a function that computes the Hessian.
The Hessian of the Rosenbrock function in analytical form is:

H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j} = 200\,(\delta_{i,j} - 2 x_{i-1} \delta_{i-1,j}) - 400\,x_i\,(\delta_{i+1,j} - 2 x_i \delta_{i,j}) - 400\,\delta_{i,j}\,(x_{i+1} - x_i^2) + 2\,\delta_{i,j}

= (202 + 1200 x_i^2 - 400 x_{i+1})\,\delta_{i,j} - 400\,x_i\,\delta_{i+1,j} - 400\,x_{i-1}\,\delta_{i-1,j}

where i, j ∈ [1, N-2], and the indices i, j ∈ [0, N-1] define the full N×N matrix.

The remaining non-zero elements of the matrix are:

\frac{\partial^2 f}{\partial x_0^2} = 1200\,x_0^2 - 400\,x_1 + 2

\frac{\partial^2 f}{\partial x_0 \partial x_1} = \frac{\partial^2 f}{\partial x_1 \partial x_0} = -400\,x_0

\frac{\partial^2 f}{\partial x_{N-1} \partial x_{N-2}} = \frac{\partial^2 f}{\partial x_{N-2} \partial x_{N-1}} = -400\,x_{N-2}

\frac{\partial^2 f}{\partial x_{N-1}^2} = 200

For example, in 5-dimensional space (N = 5), the Hessian matrix of the Rosenbrock function has a banded structure:

H = \begin{pmatrix}
1200 x_0^2 - 400 x_1 + 2 & -400 x_0 & 0 & 0 & 0 \\
-400 x_0 & 202 + 1200 x_1^2 - 400 x_2 & -400 x_1 & 0 & 0 \\
0 & -400 x_1 & 202 + 1200 x_2^2 - 400 x_3 & -400 x_2 & 0 \\
0 & 0 & -400 x_2 & 202 + 1200 x_3^2 - 400 x_4 & -400 x_3 \\
0 & 0 & 0 & -400 x_3 & 200
\end{pmatrix}

The code that computes this Hessian, together with the code for minimizing the Rosenbrock function using the conjugate gradient (Newton) method, is given below:

def rosen_hess(x):
    x = np.asarray(x)
    H = np.diag(-400*x[:-1],1) - np.diag(400*x[:-1],-1)
    diagonal = np.zeros_like(x)
    diagonal[0] = 1200*x[0]**2-400*x[1]+2
    diagonal[-1] = 200
    diagonal[1:-1] = 202 + 1200*x[1:-1]**2 - 400*x[2:]
    H = H + np.diag(diagonal)
    return H

res = minimize(rosen, x0, method='Newton-CG', 
               jac=rosen_der, hess=rosen_hess,
               options={'xtol': 1e-8, 'disp': True})
print(res.x)

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 24
         Function evaluations: 33
         Gradient evaluations: 56
         Hessian evaluations: 24
[1.         1.         1.         0.99999999 0.99999999]

Example with the definition of a Hessian-vector product function

In real-world problems, computing and storing the entire Hessian matrix can require significant time and memory. In that case there is no need to specify the Hessian matrix itself, because the minimization procedure only needs a vector equal to the product of the Hessian with another arbitrary vector. From a computational point of view, it is therefore much better to define a function that directly returns the result of the product of the Hessian with an arbitrary vector.

Consider the hessp function, which takes the minimization vector as its first argument and an arbitrary vector as its second argument (along with any other arguments of the function being minimized). In our case, computing the product of the Hessian of the Rosenbrock function with an arbitrary vector is not particularly difficult. If p is an arbitrary vector, then the product H(x)·p has the form:

H(x)\,p = \begin{pmatrix}
(1200 x_0^2 - 400 x_1 + 2)\,p_0 - 400\,x_0\,p_1 \\
\vdots \\
-400\,x_{i-1}\,p_{i-1} + (202 + 1200 x_i^2 - 400 x_{i+1})\,p_i - 400\,x_i\,p_{i+1} \\
\vdots \\
-400\,x_{N-2}\,p_{N-2} + 200\,p_{N-1}
\end{pmatrix}

The function that computes the product of the Hessian with an arbitrary vector is passed as the value of the hessp argument to the minimize function:

def rosen_hess_p(x, p):
    x = np.asarray(x)
    Hp = np.zeros_like(x)
    Hp[0] = (1200*x[0]**2 - 400*x[1] + 2)*p[0] - 400*x[0]*p[1]
    Hp[1:-1] = (-400*x[:-2]*p[:-2] + (202 + 1200*x[1:-1]**2 - 400*x[2:])*p[1:-1]
                - 400*x[1:-1]*p[2:])
    Hp[-1] = -400*x[-2]*p[-2] + 200*p[-1]
    return Hp

res = minimize(rosen, x0, method='Newton-CG',
               jac=rosen_der, hessp=rosen_hess_p,
               options={'xtol': 1e-8, 'disp': True})

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 24
         Function evaluations: 33
         Gradient evaluations: 56
         Hessian evaluations: 66
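
A quick consistency check: the Hessian-vector product should match multiplying the full Hessian by the same vector; the test point and vector below are arbitrary:

x_test = np.array([1.3, 0.7, 0.8, 1.9, 1.2])
p_test = np.array([0.5, -1.0, 2.0, 0.0, 1.5])
print(np.allclose(rosen_hess(x_test) @ p_test, rosen_hess_p(x_test, p_test)))  # True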

Conjugate gradient trust region algorithm (Newton)

An ill-conditioned Hessian matrix and incorrect search directions can cause Newton's conjugate gradient algorithm to be ineffective. In such cases, preference is given to the trust-region method of Newton conjugate gradients.

Example with the Hessian matrix defined:

res = minimize(rosen, x0, method='trust-ncg',
               jac=rosen_der, hess=rosen_hess,
               options={'gtol': 1e-8, 'disp': True})
print(res.x)

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 20
         Function evaluations: 21
         Gradient evaluations: 20
         Hessian evaluations: 19
[1. 1. 1. 1. 1.]

Example with the Hessian-vector product function:

res = minimize(rosen, x0, method='trust-ncg', 
                jac=rosen_der, hessp=rosen_hess_p, 
                options={'gtol': 1e-8, 'disp': True})
print(res.x)

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 20
         Function evaluations: 21
         Gradient evaluations: 20
         Hessian evaluations: 0
[1. 1. 1. 1. 1.]

Krylov-type method

Like the trust-ncg method, Krylov-type methods are well suited to large-scale problems because they use only matrix-vector products. Their essence is to solve the problem in a trust region bounded by a truncated Krylov subspace. For indefinite problems this method is preferable, since it uses fewer nonlinear iterations thanks to the smaller number of matrix-vector products per subproblem compared with the trust-ncg method. In addition, the solution to the quadratic subproblem is found more accurately than with the trust-ncg method.
Example with the Hessian matrix defined:

res = minimize(rosen, x0, method='trust-krylov',
               jac=rosen_der, hess=rosen_hess,
               options={'gtol': 1e-8, 'disp': True})

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 19
         Function evaluations: 20
         Gradient evaluations: 20
         Hessian evaluations: 18

print(res.x)

    [1. 1. 1. 1. 1.]

Example with the Hessian-vector product function:

res = minimize(rosen, x0, method='trust-krylov',
               jac=rosen_der, hessp=rosen_hess_p,
               options={'gtol': 1e-8, 'disp': True})

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 19
         Function evaluations: 20
         Gradient evaluations: 20
         Hessian evaluations: 0

print(res.x)

    [1. 1. 1. 1. 1.]

Algorithm for the nearly exact solution in the trust region

All of the methods above (Newton-CG, trust-ncg and trust-krylov) are well suited to large-scale problems (with thousands of variables). This is because the underlying conjugate gradient algorithm relies on an approximate computation of the inverse Hessian matrix. The solution is found iteratively, without explicitly forming the Hessian. Since you only need to define a function for the product of the Hessian with an arbitrary vector, these algorithms are especially good for working with sparse (band-diagonal) matrices. This results in low memory costs and significant time savings.

For medium-sized problems, the cost of storing and factorizing the Hessian is not critical. This means a solution can be obtained in fewer iterations by solving the trust-region subproblems almost exactly. To do this, certain nonlinear equations are solved iteratively for each quadratic subproblem. Such a solution usually requires 3 or 4 Cholesky decompositions of the Hessian matrix. As a result, the method converges in fewer iterations and requires fewer objective function evaluations than the other trust-region methods implemented. This algorithm requires specifying the full Hessian matrix and does not support the use of a Hessian-vector product function.

Example with minimization of the Rosenbrock function:

res = minimize(rosen, x0, method='trust-exact',
               jac=rosen_der, hess=rosen_hess,
               options={'gtol': 1e-8, 'disp': True})
res.x

Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 13
         Function evaluations: 14
         Gradient evaluations: 13
         Hessian evaluations: 14

array([1., 1., 1., 1., 1.])
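
To wrap up, the sketch below re-runs all of the methods shown above from the same starting point and compares their iteration and evaluation counts; the exact numbers vary between SciPy versions:

methods = ['nelder-mead', 'powell', 'BFGS', 'Newton-CG',
           'trust-ncg', 'trust-krylov', 'trust-exact']
for m in methods:
    kwargs = {}
    if m not in ('nelder-mead', 'powell'):
        kwargs['jac'] = rosen_der    # gradient-based methods
    if m in ('Newton-CG', 'trust-ncg', 'trust-krylov', 'trust-exact'):
        kwargs['hess'] = rosen_hess  # methods that also use the Hessian
    res = minimize(rosen, x0, method=m, **kwargs)
    print(f"{m:>13}  nit={res.nit:4d}  nfev={res.nfev:5d}  fun={res.fun:.2e}")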

Perhaps we will stop there. In the next article I will try to cover the most interesting aspects of constrained minimization, the use of minimization in solving approximation problems, minimizing a function of one variable, least-squares solvers, and finding the roots of a system of equations using the scipy.optimize package.

Source: https://docs.scipy.org/doc/scipy/reference/

Source: www.habr.com
