Introduction to the network part of cloud infrastructure

Cloud computing is penetrating deeper and deeper into our lives, and there is probably not a single person who has not used it at least once. But what a cloud actually is and how it works, few people know even at the level of a general idea. 5G is already becoming a reality, and the telecom infrastructure is starting to move from pillar solutions to cloud solutions, just as it once moved from fully hardware solutions to virtualized "pillars".

Today we will talk about the inner world of cloud infrastructure, and in particular we will look at the basics of its network part.

What is a cloud? The same old virtualization, just seen in profile?

A more than logical question. No - it is not virtualization, although it cannot exist without virtualization. Let's look at two definitions:

Cloud computing (hereinafter the Cloud) is a model for providing convenient, on-demand access to distributed computing resources that can be deployed and launched with minimal latency and minimal cost to the service provider.

Virtualization is the ability to divide one physical entity (for example, a server) into several virtual ones, thereby increasing resource utilization (for example, you had 3 servers loaded at 25-30 percent; after virtualization you get 1 server loaded at 80-90 percent). Naturally, virtualization eats up some of the resources - you have to feed the hypervisor - but, as practice shows, the game is worth the candle. An ideal example of virtualization is VMWare, which prepares virtual machines excellently, or, for example, KVM, which I personally prefer, but that is a matter of taste.

We use virtualization without thinking twice, and even hardware routers already use it - for example, in the latest versions of JunOS the operating system is installed as a virtual machine on top of a real-time Linux distribution (Wind River 9). But virtualization is not the cloud, although the cloud cannot exist without virtualization.

Virtualization is one of the building blocks from which the cloud is built.

Making a cloud by simply gathering several hypervisors into one L2 domain, adding a couple of yaml playbooks for automatically registering vlans via some kind of Ansible script, and bolting something like an orchestration system on top of it for automatically creating virtual machines will not work. More precisely, it will work, but the resulting Frankenstein is not the cloud we need, although for others it may be the ultimate dream. Moreover, if you take the same OpenStack, it is essentially a Frankenstein too, but oh well, let's not talk about that for now.

But from the definition given above, it is not entirely clear what can actually be called a cloud.

Therefore, a document from NIST (the National Institute of Standards and Technology) lists 5 main characteristics that a cloud infrastructure must have:

On-demand self-service. The user must be given unimpeded access to the computing resources allocated to him (such as networks, virtual disks, memory, processor cores, etc.), and these resources must be provided automatically - that is, without intervention from the service provider.

Broad network access. Access to resources must be provided by standard mechanisms, allowing the use of standard PCs as well as thin clients and mobile devices.

Resource pooling. Resource pools must be able to provide resources to several clients at the same time, ensuring that clients are isolated from each other and from mutual influence and competition for resources. Networks are also included in the pools, which indicates the possibility of using overlapping addressing. Pools must be able to scale on demand. The use of pools makes it possible to provide the necessary level of fault tolerance and abstraction of physical and virtual resources - the recipient of the service is simply given the set of resources he requested (where these resources are physically located, on how many servers and switches, does not matter to the client). However, it must be kept in mind that the provider has to ensure transparent reservation of these resources.

Rapid elasticity. Services must be flexible - fast provisioning of resources, their redistribution, adding or reducing resources at the client's request, and from the client's side there should be a feeling that the cloud's resources are endless. For ease of understanding: you do not, for example, see a warning that part of your disk space in Apple iCloud has disappeared because a hard drive in a server died, and drives do die. Besides, from your side the resources of this service are practically unlimited - you need 2 TB - no problem, you paid and got it. A similar example can be given with Google.Drive or Yandex.Disk.

Measured service. Cloud systems must automatically control and optimize the consumed resources, and these mechanisms must be transparent both to the user and to the service provider. That is, you can always check how many resources you and your clients are consuming.

It is worth keeping in mind that these requirements are mostly requirements for a public cloud, so for a private cloud (that is, a cloud deployed for the internal needs of a company) these requirements can be slightly relaxed. However, they still have to be met, otherwise we will not get all the benefits of cloud computing.

Why do we need a cloud?

Still, any new or existing technology, any new protocol, was created for something - nobody needs a protocol for the sake of a protocol (well, except for RIP-NG, of course). It is logical that the Cloud was created to provide some kind of service to the user/client. We are all familiar with at least a couple of cloud services, for example Dropbox or Google.Docs, and I believe most people use them successfully - for example, this article was written using the Google.Docs cloud service. But the cloud services we know are only a part of the cloud's capabilities - more precisely, they are only a SaaS-type service. A cloud service can be provided in one of three ways: as SaaS, PaaS or IaaS. Which one you need depends on your desires and capabilities.

Let's look at each of them in order:

Software as a Service (SaaS) is a model for providing a complete, ready-to-use service to the client, for example an email service like Yandex.Mail or Gmail. In this service delivery model you, as a client, do nothing except use the service - that is, you do not need to think about setting up the service, its fault tolerance or redundancy. The main thing is not to lose your password; the provider of this service does everything else for you. From the service provider's point of view, it is fully responsible for the entire service - from the server hardware and host operating systems to the database and software settings.

Platform as a Service (PaaS) - with this model the service provider gives the client a blank for a service; for example, let's take a web server. The service provider gives the client a virtual server (in fact, a set of resources such as RAM/CPU/Storage/networks, etc.) and even installs the OS and the necessary software on this server, but the configuration of all this is done by the client himself, and the client is responsible for the operation of the service. The service provider, as in the previous case, is responsible for the operability of the physical equipment, the hypervisors, the virtual machine itself and its network availability, etc., but the service itself is no longer within its area of responsibility.

Infrastructure as a Service (IaaS) - this approach is already more interesting: in fact, the service provider gives the client a complete virtualized infrastructure - that is, some set (pool) of resources, such as CPU cores, RAM, networks, etc. Everything else is up to the client - what the client wants to do with these resources within the allocated pool does not matter to the provider. Whether the client wants to create his own vEPC or even set up a mini operator and provide communication services - no question - go ahead. In such a scenario the service provider is responsible for provisioning the resources, their fault tolerance and availability, as well as for the OS that allows these resources to be pooled and made available to the client with the ability to increase or decrease them at any time at the client's request. The client configures all virtual machines and the rest of the tinsel himself through a self-service portal and console, including setting up networks (except for external networks).

What is OpenStack?

In all three options the service provider needs an OS that makes it possible to create a cloud infrastructure. In fact, with SaaS, more than one division is responsible for the whole technology stack - there is a division that is responsible for the infrastructure - that is, it provides IaaS to another division, and that division provides SaaS to the client. OpenStack is one of the cloud operating systems that lets you gather a bunch of switches, servers and storage systems into a single resource pool, divide this common pool into sub-pools (tenants) and provide these resources to clients over the network.

OpenStack is a cloud operating system that lets you manage large pools of compute resources, data storage and network resources, provisioned and managed through an API using standard authentication mechanisms.

In other words, it is a set of free software projects designed for creating cloud services (both public and private) - that is, a set of tools that allow you to combine server and switching equipment into a single pool of resources, manage these resources and provide the necessary level of fault tolerance.

At the time of writing this material, the structure of OpenStack looks like this:
(image from openstack.org)

Each of the components included in OpenStack performs a specific function. This distributed architecture allows you to include in the solution exactly the set of functional components that you need. However, some components are root components and their removal will lead to complete or partial inoperability of the solution as a whole. These usually include:

  • Dashboard - a web-based GUI for managing OpenStack services
  • Keystone - a centralized identity service that provides authentication and authorization functions for other services, as well as management of user credentials and their roles.
  • Neutron - a network service that provides connectivity between the interfaces of various OpenStack services (including connectivity between VMs and their access to the outside world)
  • Cinder - provides access to block storage for virtual machines
  • Nova - virtual machine lifecycle management
  • Glance - repository of virtual machine images and snapshots
  • Swift - provides access to object storage
  • Ceilometer - a service that provides the ability to collect telemetry and to measure available and consumed resources
  • Heat - orchestration based on templates for the automatic creation and provisioning of resources

A complete list of all the projects and their purpose can be viewed here.

Each OpenStack component is a service that performs a specific function and provides an API for managing that function and for interacting with the other services of the cloud operating system in order to create a unified infrastructure. For example, Nova provides compute resource management and an API for accessing the configuration of these resources, Glance provides image management and an API for managing images, Cinder provides block storage and an API for managing it, etc. All functions are very closely interconnected.

However, if you look closely, all the services running in OpenStack are ultimately some kind of virtual machine (or container) connected to the network. The question arises: why do we need so many components?

Let's walk through the algorithm for creating a virtual machine and connecting it to the network and to persistent storage in OpenStack (a CLI example of the request that triggers this whole chain is shown right after the list).

  1. When you create a request to create a machine, whether it is a request via Horizon (Dashboard) or via the CLI, the first thing that happens is the authorization of your request in Keystone - can you create a machine, do you have the right to use this network, does your quota allow it, and so on.
  2. Keystone authenticates your request and generates an auth token in the response message, which will be used later. Having received the response from Keystone, the request is sent towards Nova (nova-api).
  3. Nova-api checks the validity of your request by contacting Keystone with the previously generated auth token.
  4. Keystone authenticates the request and provides information about permissions and restrictions based on this auth token.
  5. Nova-api creates an entry for the new VM in the nova database and passes the request to create the machine to nova-scheduler.
  6. Nova-scheduler selects the host (compute node) on which the VM will be deployed based on the specified parameters, weights and zones. A record of this and the VM ID are written to the nova database.
  7. Next, nova-scheduler contacts nova-compute with a request to deploy the instance. Nova-compute contacts nova-conductor to obtain information about the machine parameters (nova-conductor is a nova component that acts as a proxy between the nova database and nova-compute, limiting the number of requests to the nova database in order to avoid problems with database consistency under load).
  8. Nova-conductor receives the requested information from the nova database and passes it to nova-compute.
  9. Next, nova-compute calls glance to obtain the image ID. Glance validates the request in Keystone and returns the requested information.
  10. Nova-compute contacts neutron to obtain information about the network parameters. Like glance, neutron validates the request in Keystone, after which it creates an entry in its database (port identifier, etc.), creates a request to create the port, and returns the requested information to nova-compute.
  11. Nova-compute contacts cinder with a request to allocate a volume for the virtual machine. Like glance, cinder validates the request in Keystone, creates a volume creation request and returns the requested information.
  12. Nova-compute contacts libvirt with a request to deploy a virtual machine with the specified parameters.
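
For reference, this entire chain is kicked off by a single user-facing request; a hedged sketch of what it might look like from the CLI (the flavor, image and network names here are purely illustrative):

openstack server create --flavor m1.small --image cirros-0.5.1 --network red-net vm-red-1

Everything described in steps 1-12 happens behind this one command.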

In fact, the seemingly simple operation of creating a simple virtual machine turns into such a maelstrom of API calls between components of the cloud platform. Moreover, as you can see, even the services mentioned earlier themselves consist of smaller components between which interaction takes place. Creating machines is only a small part of what the cloud platform allows you to do - there is a service responsible for traffic balancing, a service responsible for block storage, a service responsible for DNS, a service responsible for provisioning bare-metal servers, and so on. The cloud allows you to treat your virtual machines like a herd of sheep (as opposed to virtualization). If something happens to your machine in a virtual environment, you restore it from backups, etc., but cloud applications are built in such a way that an individual virtual machine does not play such an important role - the virtual machine "dies" - no problem - a new one is simply created from a template and, as they say, the squad did not notice the loss of a fighter. Naturally, this assumes the presence of orchestration mechanisms - using Heat templates, you can easily deploy a complex function consisting of dozens of networks and virtual machines.

It should always be remembered that there is no cloud infrastructure without a network - every element interacts with the other elements in one way or another through the network. In addition, the cloud's network is not at all static. Naturally, the underlay network is more or less static - new nodes and switches are not added every day - but the overlay part can and inevitably will change constantly: new networks will be added or removed, new virtual machines will appear and old ones will die. And as you remember from the definition of the cloud given at the beginning of the article, resources must be allocated to the user automatically and with minimal (or better, no) intervention from the service provider. That is, the kind of network resource provisioning that exists now in the form of a front end (your personal account accessible via http/https) with a network engineer Vasily on the back end is not a cloud, even if Vasily has eight hands.

Neutron, as a network service, provides an API for managing the network part of the cloud infrastructure. It powers and manages the networking in OpenStack by providing an abstraction layer called Network-as-a-Service (NaaS). That is, a network is the same kind of virtual, measurable unit as, for example, virtual CPU cores or RAM.
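
To make this a bit less abstract, here is a hedged sketch of how a tenant would request such a "network unit" through the standard CLI (the names and the subnet range are illustrative):

openstack network create red-net
openstack subnet create --network red-net --subnet-range 10.0.0.0/24 red-subnet

From the tenant's point of view this is just another quota-controlled resource, like vCPUs or RAM.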

But before moving on to the architecture of the network part of OpenStack, let's consider how the network works in OpenStack and why the network is an important and integral part of the cloud.

So, we have two VMs of the RED client and two VMs of the GREEN client. Let's assume that these machines are placed on two hypervisors like this:

(image)

At the moment this is just the virtualization of 4 servers and nothing more, since all we have done is virtualize 4 servers by placing them on two physical servers. So far they are not even connected to the network.

To turn this into a cloud, we need to add several components. First, let's virtualize the network part - we need to connect these 4 machines in pairs, and the clients want an L2 connection. You could use a switch, configure a trunk towards it and sort everything out using a linux bridge or, for the more advanced, openvswitch (we will return to this later). But there can be a lot of networks, and constantly pushing L2 through the switch is not the best idea - different departments, a service desk, months of waiting for a request to be completed, weeks of troubleshooting - in the modern world this approach no longer works. And the sooner a company understands this, the easier it is for it to move forward. Therefore, we will choose an L3 network between the hypervisors, over which our virtual machines will communicate, and on top of this L3 network we will build virtual overlay L2 networks in which the traffic of our virtual machines will run. GRE, Geneve or VxLAN can be used as the encapsulation. Let's settle on the latter for now, although it is not particularly important.

We need to place the VTEP somewhere (I hope everyone is familiar with VxLAN terminology). Since the L3 network comes directly from the servers, nothing prevents us from placing the VTEPs on the servers themselves, and OVS (OpenvSwitch) is excellent at doing this. As a result, we got the following design:

(image)
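
As a very rough sketch of what Neutron will later automate, a VTEP like this could be created by hand with OVS (the bridge name, VNI and tunnel endpoint addresses are purely illustrative):

# on hypervisor 1, whose underlay address is 10.10.10.1; the peer hypervisor is 10.10.10.2
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int vxlan-1 -- set interface vxlan-1 type=vxlan options:remote_ip=10.10.10.2 options:key=100
# the VM's tap interface is then attached to the same bridge
ovs-vsctl add-port br-int tap-red-vm-1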

Since traffic between VMs must be separated, the ports facing the virtual machines will have different vlan numbers. The tag number matters only within one virtual switch, because once the traffic is encapsulated in VxLAN we can easily strip it, since we will have a VNI.

(image)

Now we can create our machines and virtual networks for them without any problems.

But what if the client has another machine, but in a different network? We need routing between the networks. We will look at the simple option, where centralized routing is used - that is, traffic is routed through dedicated network nodes (well, as a rule, they are combined with the control nodes, so we will have the same thing).

It does not seem complicated - we make a bridge interface on the control node, drive traffic to it and route it from there to where it needs to go. But the problem is that the RED client wants to use the 10.0.0.0/24 network, and the GREEN client wants to use the 10.0.0.0/24 network as well. That is, the address ranges start to overlap. In addition, clients do not want other clients to be able to route into their internal networks, which makes sense. To separate the networks and the clients' data traffic, we will allocate a separate namespace to each of them. A namespace is in fact a copy of the Linux network stack, so clients in the RED namespace are completely isolated from clients in the GREEN namespace (well, routing between these client networks can be allowed either through the default namespace or on upstream transport equipment).
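
A minimal sketch of how namespaces make such overlapping addressing possible (all names and addresses are illustrative):

ip netns add ns-red
ip netns add ns-green
ip link add veth-red type veth peer name veth-red-br
ip link set veth-red netns ns-red
ip netns exec ns-red ip addr add 10.0.0.1/24 dev veth-red
ip link add veth-green type veth peer name veth-green-br
ip link set veth-green netns ns-green
ip netns exec ns-green ip addr add 10.0.0.1/24 dev veth-green

The same 10.0.0.1/24 now exists twice on the same host without any conflict, because each copy lives in its own network stack.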

That is, we end up with the following scheme:

(image)

The L2 tunnels from all the compute nodes converge on the control node, where the L3 interfaces for these networks are located, each in a dedicated namespace for isolation.

But we have forgotten the most important thing. The virtual machine must provide a service to the client, that is, it must have at least one external interface through which it can be reached. In other words, we need to get out into the outside world. There are different options here. Let's take the simplest one: we will add one network per client, which will be valid in the provider's network and will not overlap with other networks. The networks can also overlap and look into different VRFs on the provider network side. This network will also live in each client's namespace. However, they will still go out into the outside world through one physical (or bonded, which is more logical) interface. To separate client traffic, the traffic going outside will be tagged with the VLAN tag allocated to the client.

As a result, we got this scheme:

(image)

A reasonable question: why not make gateways on the compute nodes themselves? This is not a big problem; moreover, if you turn on the distributed router (DVR), it will work exactly that way. In this scenario we are considering the simplest option with a centralized gateway, which is used by default in OpenStack. For high-load functions both the distributed router and acceleration technologies such as SR-IOV and Passthrough are used, but, as they say, that is a completely different story. First let's deal with the basic part, and then we will go into the details.

In fact, our scheme is already workable, but there are a couple of nuances:

  • We need to somehow protect our machines, that is, put a filter on the switch interface facing the client.
  • Make it possible for a virtual machine to obtain an IP address automatically, so that you do not have to log into it through the console every time to set an address.

Let's start with protecting the machines. For this you can use plain iptables, so why not.
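
A hedged sketch of what such a filter could look like with plain iptables (the tap interface name and addresses are illustrative, and this is much simpler than what Neutron security groups actually install):

# drop anything leaving the VM port that is not sourced from the VM's own address
iptables -A FORWARD -m physdev --physdev-in tap-red-vm-1 ! -s 10.0.0.10 -j DROP
# towards the VM, allow only return traffic of established sessions
iptables -A FORWARD -m physdev --physdev-out tap-red-vm-1 -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -m physdev --physdev-out tap-red-vm-1 -j DROP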

That is, now our topology has become a little more complicated:

(image)

Let's move on. We need to add a DHCP server. The most logical place to put the DHCP servers for each client is the control node already mentioned above, where the namespaces are located:

(image)

But there is a small problem. What if everything reboots and all the information about the leased addresses in DHCP disappears? It is logical that the machines will then be given new addresses, which is not very convenient. There are two ways out here. Either use domain names and add a DNS server for each client, in which case the actual address will not matter much to us (similar to the network part in k8s) - but there is a problem with external networks, since addresses in them can also be issued via DHCP - you would need synchronization between the DNS servers in the cloud platform and the external DNS server, which in my opinion is not very flexible, although quite possible. Or the second option is to use metadata - that is, to save the information about the address issued to the machine so that the DHCP server knows which address to issue if the machine has already received one. The second option is simpler and more flexible, since it also allows you to save additional information about the machine. Now let's add the metadata service to the diagram:

(image)

Another issue worth discussing is the ability for all clients to use a single external network. External networks are awkward if they have to be valid throughout the whole network - you would need to constantly allocate these networks and control their allocation. The ability to use a single, pre-configured external network for all clients is very useful when building a public cloud. It makes deploying machines easier because we do not have to consult an address database and choose a unique address space for each client's external network. In addition, we can register the external network in advance, and at deployment time we will only need to associate external addresses with the client machines.

And here NAT comes to our aid - we will simply allow clients to reach the outside world through the default namespace using NAT translation. Well, here there is a small problem. That works fine if the client machine acts as a client and not as a server - that is, it initiates connections rather than accepts them. But for us it will be the other way around. In that case we need to do destination NAT so that, when receiving traffic, the control node understands that this traffic is destined for virtual machine A of client A, which means it has to translate an external address, for example 100.1.1.1, into the internal address 10.0.0.1. In this case, although all clients use the same external network, the internal isolation is fully preserved. That is, we need to do dNAT and sNAT on the control node. Whether to use a single network with floating addresses or external networks, or both at once, depends on what you want to bring into the cloud. We will not add floating addresses to the diagram, but will keep the external networks that were added earlier - each client has its own external network (in the diagram they are shown as separate vlan tags on the external interface, for example vlan 200).
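
In terms of plain Linux tooling, the dNAT/sNAT pair inside a client's namespace could look roughly like this (addresses and interface names are illustrative):

# incoming: the external address 100.1.1.1 is translated to the internal VM address 10.0.0.1
ip netns exec ns-red iptables -t nat -A PREROUTING -d 100.1.1.1/32 -j DNAT --to-destination 10.0.0.1
# outgoing: traffic from the internal network leaves masked behind the external address
ip netns exec ns-red iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o qg-ext -j SNAT --to-source 100.1.1.1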

As a result, we got an interesting and at the same time fairly well-thought-out solution, which has a certain flexibility but so far has no fault-tolerance mechanisms.

Firstly, we have only one control node - its failure will lead to the collapse of the whole system. To fix this problem, you need to build at least a quorum of 3 nodes. Let's add this to the scheme:

(image)

Naturally, all the nodes are synchronized, and when an active node goes down, another node will take over its duties.

The next problem is the virtual machine disks. At the moment they are stored on the hypervisors themselves, and in the event of problems with a hypervisor we lose all the data - and having RAID will not help here if we lose not a disk but the whole server. To solve this we need a service that will act as a front end for some kind of storage. What kind of storage it will be is not particularly important to us, but it must protect our data against the failure of both a disk and a node, and possibly of a whole rack. There are several options here - there are, of course, SAN networks with Fibre Channel, but let's be honest - FC is already a relic of the past - an analogue of E1 in transport - yes, I agree, it is still used, but only where it is absolutely impossible to do without it. Therefore, I would not voluntarily deploy an FC network in 2020 knowing that there are other, more interesting options. Although to each his own, and perhaps there are those who believe that FC, with all its limitations, is all we need - I will not argue, everyone has their own opinion. In my view, however, the most interesting solution is to use an SDS such as Ceph.

Ceph allows you to build a highly available data storage solution with a bunch of possible redundancy options, starting with erasure codes with parity (analogous to raid 5 or 6) and ending with full replication of data to different disks, taking into account the location of disks in servers and of servers in racks, etc.
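
Just to illustrate the two ends of that spectrum, a replicated pool and an erasure-coded pool in Ceph could be created roughly like this (the pool names, PG counts and the k/m profile are illustrative):

# a pool keeping 3 full copies of every object
ceph osd pool create volumes 128 128 replicated
ceph osd pool set volumes size 3
# an erasure-coded pool (4 data + 2 parity chunks, loosely analogous to raid 6)
ceph osd erasure-code-profile set ec-4-2 k=4 m=2
ceph osd pool create backups 128 128 erasure ec-4-2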

To build Ceph you need at least 3 nodes. Interaction with the storage will also happen over the network, using the block, object and file storage services. Let's add storage to the scheme:

(image)

Note: you can also make the compute nodes hyperconverged - this is the concept of combining several functions on one node, for example storage+compute, without dedicating special nodes for ceph storage. We get the same fault-tolerant scheme, since the SDS will protect the data with the redundancy level we specify. However, hyperconverged nodes are always a compromise - the storage node does not just heat the air, as it may seem at first glance (since there are no virtual machines on it) - it spends CPU resources on servicing the SDS (it in fact performs all the replication and the recovery after failures of nodes, disks, etc.). That is, you lose some of the power of a compute node if you combine it with storage.

All this stuff needs to be managed somehow - we need something through which we can create a machine, a network, a virtual router, etc. To do this, we will add a service to the control node that will act as a dashboard - the client will be able to connect to this portal via http/https and do everything he needs (well, almost).

As a result, we now have a fault-tolerant system. All the elements of this infrastructure need to be managed somehow. It was described earlier that OpenStack is a set of projects, each of which provides a specific function. As we can see, there are more than enough components that need to be configured and controlled. Today we will talk about the network part.

Neutron architecture

In OpenStack, it is Neutron that is responsible for connecting virtual machine ports to a common L2 network, for routing traffic between VMs located in different L2 networks, as well as for routing to external networks, providing services such as NAT, Floating IP, DHCP, etc.

At a high level, the operation of the network service (the basic part) can be described as follows.

When starting a VM, the network service:

  1. Creates a port (or ports) for the given VM and notifies the DHCP service about it;
  2. A new virtual network device is created (via libvirt);
  3. The VM is connected to the port(s) created in step 1;

Oddly enough, Neutron's work is based on standard mechanisms familiar to everyone who has ever dived into Linux - namespaces, iptables, linux bridges, openvswitch, conntrack, etc.

It should be clarified right away that Neutron is not an SDN controller.

Neutron consists of several interconnected components:

(image)

Openstack-neutron-server is a daemon that handles user requests via the API. This daemon is not involved in configuring any network connections itself, but provides the necessary information for this to its plugins, which then configure the desired network element. Neutron agents on OpenStack nodes register with the Neutron server.

Neutron-server is actually an application written in python, consisting of two parts:

  • REST service
  • Neutron Plugin (core/service)

The REST service is designed to receive API calls from other components (for example, a request to provide some information, etc.)

Plugins are plug-in software components/modules that are called during API requests - that is, the provisioning of a service happens through them. Plugins are divided into two types - service and core. As a rule, the core plugin is mainly responsible for managing the address space and the L2 connections between VMs, while service plugins provide additional functionality such as VPN or FW.

A list of the plugins available today can be seen, for example, here

There can be several service plugins, but there can be only one core plugin.

openstack-neutron-ml2 is the standard OpenStack core plugin. This plugin has a modular architecture (unlike its predecessor) and configures the network service through the drivers connected to it. We will look at the plugin itself a little later, since it is in fact what gives OpenStack its flexibility in the network part. The core plugin can be replaced (for example, Contrail Networking is such a replacement).

RPC service (rabbitmq-server) - a service that provides queue management and interaction with other OpenStack services, as well as interaction between the network service agents.

Network agents - agents located on each node, through which the network services are configured.

There are several types of agents.

The main agent is the L2 agent. These agents run on every hypervisor, including the control nodes (more precisely, on all nodes that provide any service for tenants), and their main function is to connect virtual machines to the common L2 network, as well as to generate alerts when any events occur (for example, a port being disabled/enabled).

The next, no less important, agent is the L3 agent. By default this agent runs only on the network node (the network node is often combined with the control node) and provides routing between tenant networks (both between a tenant's own networks and the networks of other tenants, and towards the outside world, providing NAT as well as the DHCP service). However, when DVR (distributed router) is used, the need for an L3 agent also appears on the compute nodes.

The L3 agent uses Linux namespaces to provide each tenant with its own set of isolated networks and with virtual routers that route traffic and provide gateway services for the Layer 2 networks.
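
On a network node this is easy to observe with standard tools: the namespaces created by the L3 and DHCP agents follow a qrouter-/qdhcp- naming scheme (the IDs are placeholders here):

ip netns list
# qrouter-<router-id>   - one per virtual router
# qdhcp-<network-id>    - one per network with DHCP enabled
ip netns exec qrouter-<router-id> ip route   # the routing table of that tenant's virtual router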

Database - a database of identifiers of networks, subnets, ports, pools, etc.

In fact, Neutron accepts API requests for the creation of any network entities, authenticates the request, and via RPC (if it talks to some plugin or agent) or REST API (if it communicates with an SDN) passes to the agents, through the plugins, the instructions needed to organize the requested service.

Now let's turn to the test installation (how it is deployed and what it consists of we will see later, in the practical part) and see where each component is located:

(overcloud) [stack@undercloud ~]$ openstack network agent list  
+--------------------------------------+--------------------+-------------------------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host                                | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------------------------------------+-------------------+-------+-------+---------------------------+
| 10495de9-ba4b-41fe-b30a-b90ec3f8728b | Open vSwitch agent | overcloud-novacompute-1.localdomain | None              | :-)   | UP    | neutron-openvswitch-agent |
| 1515ad4a-5972-46c3-af5f-e5446dff7ac7 | L3 agent           | overcloud-controller-0.localdomain  | nova              | :-)   | UP    | neutron-l3-agent          |
| 322e62ca-1e5a-479e-9a96-4f26d09abdd7 | DHCP agent         | overcloud-controller-0.localdomain  | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 9c1de2f9-bac5-400e-998d-4360f04fc533 | Open vSwitch agent | overcloud-novacompute-0.localdomain | None              | :-)   | UP    | neutron-openvswitch-agent |
| d99c5657-851e-4d3c-bef6-f1e3bb1acfb0 | Open vSwitch agent | overcloud-controller-0.localdomain  | None              | :-)   | UP    | neutron-openvswitch-agent |
| ff85fae6-5543-45fb-a301-19c57b62d836 | Metadata agent     | overcloud-controller-0.localdomain  | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+-------------------------------------+-------------------+-------+-------+---------------------------+
(overcloud) [stack@undercloud ~]$ 

(image)

That is actually the whole structure of Neutron. Now it is worth spending a little time on the ML2 plugin.

Modular Layer 2

As mentioned above, the ML2 plugin is the standard OpenStack core plugin and has a modular architecture.

The predecessor of the ML2 plugin had a monolithic structure, which did not allow, for example, mixing several technologies in one installation. For example, you could not use both openvswitch and linuxbridge at the same time - it was either the first or the second. For this reason the ML2 plugin, with its modular architecture, was created.

ML2 has two components - two kinds of drivers: type drivers and mechanism drivers.

Type drivers define the technologies that will be used to organize network connectivity, for example VxLAN, VLAN or GRE. The driver also allows different technologies to be used at the same time. The standard choice is VxLAN encapsulation for the overlay networks and vlan for the external networks.

This kind of driver includes the following network types:

flat - network without tagging
vlan - tagged network
local - a special type of network for all-in-one installations (such installations are needed either for developers or for training)
GRE - overlay network using GRE tunnels
VxLAN - overlay network using VxLAN tunnels

Mechanism drivers define the tools that implement the technologies specified in the type driver, for example openvswitch, sr-iov, opendaylight, OVN, etc.

Depending on the implementation of this driver, either agents controlled by Neutron are used, or connections to an external SDN controller, which then takes care of all the issues related to organizing L2 networks, routing, etc.

An example: if we use ML2 together with OVS, then an L2 agent that manages the OVS is installed on every compute node. However, if we use, for example, OVN or OpenDayLight, then control of the OVS passes to them - Neutron, through the core plugin, gives commands to the controller, and the controller already does what it is told.
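
To see where the type and mechanism drivers are actually selected, you can look at the ML2 configuration on the controller. An abridged sketch of what it typically contains is shown below; the path and the exact values may differ between distributions and deployments:

cat /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_vxlan]
vni_ranges = 1:65535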

Let's brush up on Open vSwitch

At the moment, one of the key components of OpenStack is Open vSwitch. When OpenStack is installed without any additional vendor SDN, such as Juniper Contrail or Nokia Nuage, OVS is the main network component of the cloud and, together with iptables, conntrack and namespaces, it allows you to organize full-fledged multi-tenant overlay networks. Naturally, this component can be replaced, for example when using third-party proprietary SDN solutions.

OVS is an open source software switch designed for use in virtualized environments as a virtual traffic forwarder.

At the moment OVS has very decent functionality, which includes technologies such as QoS, LACP, VLAN, VxLAN, GENEVE, OpenFlow, DPDK, etc.

Note: OVS was not originally conceived as a soft switch for heavily loaded telecom functions and was designed more for less bandwidth-demanding IT functions, such as a WEB server or a mail server. However, OVS has been developed further, and current implementations of OVS have greatly improved its performance and capabilities, which allows it to be used by telecom operators for heavily loaded functions; there is, for example, an OVS implementation with DPDK acceleration support.

There are three important components of OVS that you need to be aware of:

  • Kernel module - a component located in kernel space that processes traffic based on rules received from the control element;
  • vSwitch daemon (ovs-vswitchd) - a process launched in user space that is responsible for programming the kernel module, that is, it directly represents the logic of the switch's operation.
  • Database server - a local database located on each host running OVS, in which the configuration is stored. SDN controllers can communicate through this module using the OVSDB protocol.

All this is accompanied by a set of diagnostic and management utilities, such as ovs-vsctl, ovs-appctl, ovs-ofctl, etc.
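
A few read-only commands from this toolkit that are useful when tracing what the cloud has actually configured (the bridge name depends on the deployment; br-int is the usual integration bridge):

ovs-vsctl show                # bridges, ports and controllers known to this OVS instance
ovs-ofctl dump-flows br-int   # OpenFlow rules installed on the integration bridge
ovs-appctl fdb/show br-int    # MAC addresses learned by the bridge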

At the moment, OpenStack is widely used by telecom operators, who migrate network functions to it, such as EPC, SBC, HLR, etc. Some functions can live without problems with OVS as it is, but, for example, an EPC processes subscriber traffic - an enormous volume of traffic passes through it (traffic volumes now reach several hundred gigabits per second). Naturally, pushing such traffic through kernel space (since that is where the forwarder is located by default) is not the best idea. Therefore, OVS is often deployed entirely in user space using DPDK acceleration technology, forwarding traffic from the NIC to user space while bypassing the kernel.

Note: in a cloud deployed for telecom functions, it is also possible to send traffic from a compute node directly to the switching equipment, bypassing OVS. The SR-IOV and Passthrough mechanisms are used for this.

How does this work on a real layout?

Well, now let's move on to the practical part and see how it all works in practice.

First, let's deploy a simple OpenStack installation. Since I do not have a pile of servers at hand for experiments, we will assemble the prototype out of virtual machines on a single physical server. Yes, naturally, such a solution is not suitable for commercial use, but as an example for seeing how the network works in OpenStack, such an installation is quite enough. Moreover, such an installation is even more interesting for training purposes, since you can capture traffic, etc.

Since we only need to see the basic part, we will not use several networks but raise everything using only two networks, and the second network in this layout will be used exclusively for access to the undercloud and to the DNS server. We will not touch external networks for now - that is a topic for a separate large article.

So, let's start in order. First, a little theory. We will install OpenStack using TripleO (Openstack on Openstack). The essence of TripleO is that we install an OpenStack all-in-one (that is, on a single node), called the undercloud, and then use the capabilities of this deployed OpenStack to install the OpenStack intended for production operation, called the overcloud. The undercloud will use its built-in ability to manage physical servers (bare metal) - the Ironic project - to provision the hypervisors that will perform the roles of compute, controller and storage nodes. That is, we do not use any third-party tools to deploy OpenStack - we deploy OpenStack using OpenStack. It will become much clearer as the installation progresses, so we will not stop here and will move forward.

Note: In this article, for the sake of simplicity, I did not use network isolation for the internal OpenStack networks; everything is deployed using just one network. However, the presence or absence of network isolation does not affect the basic functionality of the solution - everything will work exactly the same as with isolation, but the traffic will flow over the same network. For a commercial installation it is naturally necessary to use isolation, with different vlans and interfaces. For example, ceph storage management traffic and the data traffic itself (machine access to disks, etc.) use separate subnets when isolated (Storage management and Storage), and this allows you to make the solution more fault-tolerant by separating this traffic, for example, across different ports, or by applying different QoS profiles to different traffic so that data traffic does not squeeze out signaling traffic. In our case they will go over the same network, and in fact this does not limit us in any way.

Note: Since we are going to run virtual machines in a virtual environment based on virtual machines, we first need to enable nested virtualization.

You can check whether nested virtualization is enabled or not like this:


[root@hp-gen9 bormoglotx]# cat /sys/module/kvm_intel/parameters/nested
N
[root@hp-gen9 bormoglotx]# 

If you see the letter N, then we enable support for nested virtualization following any guide you can find on the network, for example this one.
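
For example, on an Intel CPU one common way to enable it looks like this (a hedged sketch - the module has to be reloaded while no VMs are running):

echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel
cat /sys/module/kvm_intel/parameters/nested   # should now show Y or 1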

We need to assemble the following environment from virtual machines:

(image)

In my case, to connect the virtual machines that are part of the future installation (I ended up with 7 of them, but you can get by with 4 if you do not have many resources), I used OpenvSwitch. I created one ovs bridge and connected the virtual machines to it via port groups. To do this, I created an xml file like this:


[root@hp-gen9 ~]# virsh net-dumpxml ovs-network-1        
<network>
  <name>ovs-network-1</name>
  <uuid>7a2e7de7-fc16-4e00-b1ed-4d190133af67</uuid>
  <forward mode='bridge'/>
  <bridge name='ovs-br1'/>
  <virtualport type='openvswitch'/>
  <portgroup name='trunk-1'>
    <vlan trunk='yes'>
      <tag id='100'/>
      <tag id='101'/>
      <tag id='102'/>
    </vlan>
  </portgroup>
  <portgroup name='access-100'>
    <vlan>
      <tag id='100'/>
    </vlan>
  </portgroup>
  <portgroup name='access-101'>
    <vlan>
      <tag id='101'/>
    </vlan>
  </portgroup>
</network>

Three port groups are declared here - two access and one trunk (the latter was needed for the DNS server, but you can do without it or raise it on the host machine - whichever is more convenient for you). Next, using this template, we define our network via virsh:


virsh net-define ovs-network-1.xml 
virsh net-start ovs-network-1 
virsh net-autostart ovs-network-1 

Now let's edit the hypervisor port configuration:


[root@hp-gen9 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens1f0   
TYPE=Ethernet
NAME=ens1f0
DEVICE=ens1f0
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=ovs-br1
ONBOOT=yes
OVS_OPTIONS="trunk=100,101,102"
[root@hp-gen9 ~]
[root@hp-gen9 ~]# cat /etc/sysconfig/network-scripts/ifcfg-ovs-br1 
DEVICE=ovs-br1
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.255.200
PREFIX=24
[root@hp-gen9 ~]# 

Note: in this scenario, the address on the ovs-br1 port will not be reachable because it does not have a vlan tag. To fix this, you need to issue the command sudo ovs-vsctl set port ovs-br1 tag=100. However, after a reboot this tag will disappear (if anyone knows how to make it persistent, I will be very grateful). But this is not so important, since we will only need this address during installation and will not need it once OpenStack is fully deployed.

Next, we create the undercloud machine:


virt-install  -n undercloud --description "undercloud"  --os-type=Linux  --os-variant=centos7.0  --ram=8192  --vcpus=8  --disk path=/var/lib/libvirt/images/undercloud.qcow2,bus=virtio,size=40,format=qcow2 --network network:ovs-network-1,model=virtio,portgroup=access-100 --network network:ovs-network-1,model=virtio,portgroup=access-101 --graphics none  --location /var/lib/libvirt/boot/CentOS-7-x86_64-Minimal-2003.iso --extra-args console=ttyS0

During the installation you set all the necessary parameters, such as the machine name, passwords, users, ntp servers, etc.; you can configure the ports right away, but for me personally it is easier to log into the machine through the console after installation and fix the necessary files. If you already have a ready-made image, you can use it, or do what I did - download the minimal Centos 7 image and use it to install the VM.

After a successful installation, you should have a virtual machine on which you can install the undercloud


[root@hp-gen9 bormoglotx]# virsh list
 Id    Name                           State
----------------------------------------------------
 6     dns-server                     running
 62    undercloud                     running

First, let's install the tools needed for the installation process:

sudo yum update -y
sudo yum install -y net-tools
sudo yum install -y wget
sudo yum install -y ipmitool

Installing the undercloud

We create a stack user, set a password for it, add it to sudoers, and give it the ability to execute root commands via sudo without entering a password:


useradd stack
passwd stack

echo "stack ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/stack
chmod 0440 /etc/sudoers.d/stack

Now we specify the full undercloud name in the hosts file:


vi /etc/hosts

127.0.0.1   undercloud.openstack.rnd localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

Next, we add the repositories and install the software we need:


sudo yum install -y https://trunk.rdoproject.org/centos7/current/python2-tripleo-repos-0.0.1-0.20200409224957.8bac392.el7.noarch.rpm
sudo -E tripleo-repos -b queens current
sudo -E tripleo-repos -b queens current ceph
sudo yum install -y python-tripleoclient
sudo yum install -y ceph-ansible

Note: if you are not planning to install ceph, then you do not need to run the ceph-related commands. I used the Queens release, but you can use any other one you like.

Next, copy the sample undercloud configuration file to the stack user's home directory:


cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf

Now we need to edit this file, adjusting it to our installation.

You need to add these lines to the beginning of the file:

vi undercloud.conf
[DEFAULT]
undercloud_hostname = undercloud.openstack.rnd
local_ip = 192.168.255.1/24
network_gateway = 192.168.255.1
undercloud_public_host = 192.168.255.2
undercloud_admin_host = 192.168.255.3
undercloud_nameservers = 192.168.255.253
generate_service_certificate = false
local_interface = eth0
local_mtu = 1450
network_cidr = 192.168.255.0/24
masquerade = true
masquerade_network = 192.168.255.0/24
dhcp_start = 192.168.255.11
dhcp_end = 192.168.255.50
inspection_iprange = 192.168.255.51,192.168.255.100
scheduler_max_attempts = 10

Now let's go through the settings:

undercloud_hostname - the full name of the undercloud server; it must match the entry on the DNS server

local_ip - the local undercloud address used for network provisioning

network_gateway - the same local address, which will act as the gateway for access to the outside world during the installation of the overcloud nodes; it also coincides with the local IP

undercloud_public_host - the external API address, any free address from the provisioning network

undercloud_admin_host - the internal API address, any free address from the provisioning network

undercloud_nameservers - the DNS server

generate_service_certificate - this line is very important in the current example, because if you do not set it to false you will get an error during installation; the problem is described in the Red Hat bug tracker

local_interface - the interface used for network provisioning. This interface will be reconfigured during the undercloud installation, so you need two interfaces on the undercloud - one for access to it, the second for provisioning

local_mtu - MTU. Since we have a test lab and I have an MTU of 1500 on the OVS switch ports, it needs to be set to 1450 so that packets encapsulated in VxLAN can get through

network_cidr - the provisioning network

masquerade - use NAT to access the external network

masquerade_network - the network that will be NATed

dhcp_start - the first address of the pool from which addresses will be assigned to the nodes during the overcloud deployment

dhcp_end - the last address of the pool from which addresses will be assigned to the nodes during the overcloud deployment

inspection_iprange - the pool of addresses needed for introspection (it must not overlap with the pool above)

scheduler_max_attempts - the maximum number of attempts to install the overcloud (it must be greater than or equal to the number of nodes)

After the file is filled in, you can give the command to deploy the undercloud:


openstack undercloud install

The installation takes from 10 to 30 minutes depending on your hardware. At the end you should see output like this:

2020-08-13 23:13:12,668 INFO: 
#############################################################################
Undercloud install complete.

The file containing this installation's passwords is at
/home/stack/undercloud-passwords.conf.

There is also a stackrc file at /home/stack/stackrc.

These files are needed to interact with the OpenStack services, and should be
secured.

#############################################################################

This output tells you that you have successfully installed the undercloud and you can now check its status and proceed to installing the overcloud.

If you look at the ifconfig output, you will see that a new bridge interface has appeared:

[stack@undercloud ~]$ ifconfig
br-ctlplane: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 192.168.255.1  netmask 255.255.255.0  broadcast 192.168.255.255
        inet6 fe80::5054:ff:fe2c:89e  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:2c:08:9e  txqueuelen 1000  (Ethernet)
        RX packets 14  bytes 1095 (1.0 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 20  bytes 1292 (1.2 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

The overcloud deployment will now be carried out through this interface.

From the output below you can see that we have all the services on one node:

(undercloud) [stack@undercloud ~]$ openstack host list
+--------------------------+-----------+----------+
| Host Name                | Service   | Zone     |
+--------------------------+-----------+----------+
| undercloud.openstack.rnd | conductor | internal |
| undercloud.openstack.rnd | scheduler | internal |
| undercloud.openstack.rnd | compute   | nova     |
+--------------------------+-----------+----------+

Below is the configuration of the network part of the undercloud:


(undercloud) [stack@undercloud ~]$ python -m json.tool /etc/os-net-config/config.json 
{
    "network_config": [
        {
            "addresses": [
                {
                    "ip_netmask": "192.168.255.1/24"
                }
            ],
            "members": [
                {
                    "dns_servers": [
                        "192.168.255.253"
                    ],
                    "mtu": 1450,
                    "name": "eth0",
                    "primary": "true",
                    "type": "interface"
                }
            ],
            "mtu": 1450,
            "name": "br-ctlplane",
            "ovs_extra": [
                "br-set-external-id br-ctlplane bridge-id br-ctlplane"
            ],
            "routes": [],
            "type": "ovs_bridge"
        }
    ]
}
(undercloud) [stack@undercloud ~]$

Installing the overcloud

At the moment we only have the undercloud, and we do not even have enough nodes from which the overcloud will be assembled. Therefore, first of all, let's create the virtual machines we need. During the deployment, the undercloud itself will install the OS and the necessary software on the overcloud machines - that is, we do not need to fully deploy a machine, but only create a disk (or disks) for it and define its parameters - in other words, we get a bare server with no OS installed on it.

Let's go to the folder with our virtual machine disks and create disks of the required size:


cd /var/lib/libvirt/images/
qemu-img create -f qcow2 -o preallocation=metadata control-1.qcow2 60G
qemu-img create -f qcow2 -o preallocation=metadata compute-1.qcow2 60G
qemu-img create -f qcow2 -o preallocation=metadata compute-2.qcow2 60G
qemu-img create -f qcow2 -o preallocation=metadata storage-1.qcow2 160G
qemu-img create -f qcow2 -o preallocation=metadata storage-2.qcow2 160G

Since we are operating as root, we need to change the owner of these disks so as not to run into a problem with permissions:


[root@hp-gen9 images]# ls -lh
total 5.8G
drwxr-xr-x. 2 qemu qemu 4.0K Aug 13 16:15 backups
-rw-r--r--. 1 root root  61G Aug 14 03:07 compute-1.qcow2
-rw-r--r--. 1 root root  61G Aug 14 03:07 compute-2.qcow2
-rw-r--r--. 1 root root  61G Aug 14 03:07 control-1.qcow2
-rw-------. 1 qemu qemu  41G Aug 14 03:03 dns-server.qcow2
-rw-r--r--. 1 root root 161G Aug 14 03:07 storage-1.qcow2
-rw-r--r--. 1 root root 161G Aug 14 03:07 storage-2.qcow2
-rw-------. 1 qemu qemu  41G Aug 14 03:07 undercloud.qcow2
[root@hp-gen9 images]# 
[root@hp-gen9 images]# 
[root@hp-gen9 images]# chown qemu:qemu /var/lib/libvirt/images/*qcow2
[root@hp-gen9 images]# ls -lh
total 5.8G
drwxr-xr-x. 2 qemu qemu 4.0K Aug 13 16:15 backups
-rw-r--r--. 1 qemu qemu  61G Aug 14 03:07 compute-1.qcow2
-rw-r--r--. 1 qemu qemu  61G Aug 14 03:07 compute-2.qcow2
-rw-r--r--. 1 qemu qemu  61G Aug 14 03:07 control-1.qcow2
-rw-------. 1 qemu qemu  41G Aug 14 03:03 dns-server.qcow2
-rw-r--r--. 1 qemu qemu 161G Aug 14 03:07 storage-1.qcow2
-rw-r--r--. 1 qemu qemu 161G Aug 14 03:07 storage-2.qcow2
-rw-------. 1 qemu qemu  41G Aug 14 03:08 undercloud.qcow2
[root@hp-gen9 images]# 

Note: if you are not going to install ceph in order to study it, you can skip the commands that create at least 3 nodes with at least two disks each, and simply indicate in the template that the virtual disks vda, vdb, etc. will be used.

Great, now we need to define all these machines:


virt-install --name control-1 --ram 32768 --vcpus 8 --os-variant centos7.0 --disk path=/var/lib/libvirt/images/control-1.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc  --network network:ovs-network-1,model=virtio,portgroup=access-100 --network network:ovs-network-1,model=virtio,portgroup=trunk-1 --dry-run --print-xml > /tmp/control-1.xml  

virt-install --name storage-1 --ram 16384 --vcpus 4 --os-variant centos7.0 --disk path=/var/lib/libvirt/images/storage-1.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc  --network network:ovs-network-1,model=virtio,portgroup=access-100 --dry-run --print-xml > /tmp/storage-1.xml  

virt-install --name storage-2 --ram 16384 --vcpus 4 --os-variant centos7.0 --disk path=/var/lib/libvirt/images/storage-2.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc  --network network:ovs-network-1,model=virtio,portgroup=access-100 --dry-run --print-xml > /tmp/storage-2.xml  

virt-install --name compute-1 --ram 32768 --vcpus 12 --os-variant centos7.0 --disk path=/var/lib/libvirt/images/compute-1.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc  --network network:ovs-network-1,model=virtio,portgroup=access-100 --dry-run --print-xml > /tmp/compute-1.xml  

virt-install --name compute-2 --ram 32768 --vcpus 12 --os-variant centos7.0 --disk path=/var/lib/libvirt/images/compute-2.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc  --network network:ovs-network-1,model=virtio,portgroup=access-100 --dry-run --print-xml > /tmp/compute-2.xml 

At the end of each command there is --print-xml > /tmp/storage-1.xml (and so on), which creates an xml file with the description of each machine in the /tmp/ folder; if you do not add it, you will not be able to define the virtual machines.

Now we need to define all these machines in virsh:


virsh define --file /tmp/control-1.xml
virsh define --file /tmp/compute-1.xml
virsh define --file /tmp/compute-2.xml
virsh define --file /tmp/storage-1.xml
virsh define --file /tmp/storage-2.xml

[root@hp-gen9 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 6     dns-server                     running
 64    undercloud                     running
 -     compute-1                      shut off
 -     compute-2                      shut off
 -     control-1                      shut off
 -     storage-1                      shut off
 -     storage-2                      shut off

[root@hp-gen9 ~]#

Now a small nuance - tripleO uses IPMI to manage the servers during installation and introspection.

Introspection is the process of inspecting the hardware in order to obtain the parameters necessary for further node provisioning. Introspection is carried out by ironic, a service designed to work with bare metal servers.

But here is the problem - while hardware IPMI servers have a separate port (or a shared port, but that is not important), virtual machines do not have such ports. Here a crutch called vbmc comes to our aid - a utility that allows you to emulate an IPMI port. This nuance is worth paying attention to, especially for those who want to set up such a lab on an ESXi hypervisor - to be honest, I do not know whether it has an analogue of vbmc, so it is worth looking into this question before deploying everything.

Installing vbmc:


yum install python2-virtualbmc

If your OS cannot find the package, add the repository:

yum install -y https://www.rdoproject.org/repos/rdo-release.rpm

Now let's set up the utility. Everything here is trivially simple. It is logical that for now there are no servers in the vbmc list:


[root@hp-gen9 ~]# vbmc list

[root@hp-gen9 ~]# 

For them to appear, they must be declared manually, like this:


[root@hp-gen9 ~]# vbmc add control-1 --port 7001 --username admin --password admin
[root@hp-gen9 ~]# vbmc add storage-1 --port 7002 --username admin --password admin
[root@hp-gen9 ~]# vbmc add storage-2 --port 7003 --username admin --password admin
[root@hp-gen9 ~]# vbmc add compute-1 --port 7004 --username admin --password admin
[root@hp-gen9 ~]# vbmc add compute-2 --port 7005 --username admin --password admin
[root@hp-gen9 ~]#
[root@hp-gen9 ~]# vbmc list
+-------------+--------+---------+------+
| Domain name | Status | Address | Port |
+-------------+--------+---------+------+
| compute-1   | down   | ::      | 7004 |
| compute-2   | down   | ::      | 7005 |
| control-1   | down   | ::      | 7001 |
| storage-1   | down   | ::      | 7002 |
| storage-2   | down   | ::      | 7003 |
+-------------+--------+---------+------+
[root@hp-gen9 ~]#

I think the command syntax is clear without explanation. However, for now all our sessions are in the DOWN state. To move them to the UP state, they need to be started:


[root@hp-gen9 ~]# vbmc start control-1
2020-08-14 03:15:57,826.826 13149 INFO VirtualBMC [-] Started vBMC instance for domain control-1
[root@hp-gen9 ~]# vbmc start storage-1 
2020-08-14 03:15:58,316.316 13149 INFO VirtualBMC [-] Started vBMC instance for domain storage-1
[root@hp-gen9 ~]# vbmc start storage-2
2020-08-14 03:15:58,851.851 13149 INFO VirtualBMC [-] Started vBMC instance for domain storage-2
[root@hp-gen9 ~]# vbmc start compute-1
2020-08-14 03:15:59,307.307 13149 INFO VirtualBMC [-] Started vBMC instance for domain compute-1
[root@hp-gen9 ~]# vbmc start compute-2
2020-08-14 03:15:59,712.712 13149 INFO VirtualBMC [-] Started vBMC instance for domain compute-2
[root@hp-gen9 ~]# 
[root@hp-gen9 ~]# 
[root@hp-gen9 ~]# vbmc list
+-------------+---------+---------+------+
| Domain name | Status  | Address | Port |
+-------------+---------+---------+------+
| compute-1   | running | ::      | 7004 |
| compute-2   | running | ::      | 7005 |
| control-1   | running | ::      | 7001 |
| storage-1   | running | ::      | 7002 |
| storage-2   | running | ::      | 7003 |
+-------------+---------+---------+------+
[root@hp-gen9 ~]#

And the final touch — you need to adjust the firewall rules (or disable it altogether):


firewall-cmd --zone=public --add-port=7001/udp --permanent
firewall-cmd --zone=public --add-port=7002/udp --permanent
firewall-cmd --zone=public --add-port=7003/udp --permanent
firewall-cmd --zone=public --add-port=7004/udp --permanent
firewall-cmd --zone=public --add-port=7005/udp --permanent
firewall-cmd --reload
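If this host is a disposable lab machine, you can also simply switch the firewall off altogether (only reasonable in a sandbox):

systemctl stop firewalld
systemctl disable firewalld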

Now let's go to the undercloud and check that everything works. The address of the host machine is 192.168.255.200; we added the required ipmitool package to the undercloud while preparing for the deployment:


[stack@undercloud ~]$ ipmitool -I lanplus -U admin -P admin -H 192.168.255.200 -p 7001 power status          
Chassis Power is off
[stack@undercloud ~]$ ipmitool -I lanplus -U admin -P admin -H 192.168.255.200 -p 7001 power on
Chassis Power Control: Up/On
[stack@undercloud ~]$ 

[root@hp-gen9 ~]# virsh list 
 Id    Name                           State
----------------------------------------------------
 6     dns-server                     running
 64    undercloud                     running
 65    control-1                      running

As you can see, we successfully started the control node via vbmc. Now let's power it off and move on:


[stack@undercloud ~]$ ipmitool -I lanplus -U admin -P admin -H 192.168.255.200 -p 7001 power off
Chassis Power Control: Down/Off
[stack@undercloud ~]$ ipmitool -I lanplus -U admin -P admin -H 192.168.255.200 -p 7001 power status
Chassis Power is off
[stack@undercloud ~]$ 

[root@hp-gen9 ~]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 6     dns-server                     running
 64    undercloud                     running
 -     compute-1                      shut off
 -     compute-2                      shut off
 -     control-1                      shut off
 -     storage-1                      shut off
 -     storage-2                      shut off

[root@hp-gen9 ~]#

The next step is introspection of the nodes on which the overcloud will be installed. To do this, we need to prepare a json file with a description of our nodes. Note that, unlike an installation on bare-metal servers, the file specifies the port on which vbmc is running for each machine.


[root@hp-gen9 ~]# virsh domiflist --domain control-1 
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    ovs-network-1 virtio      52:54:00:20:a2:2f
-          network    ovs-network-1 virtio      52:54:00:3f:87:9f

[root@hp-gen9 ~]# virsh domiflist --domain compute-1
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    ovs-network-1 virtio      52:54:00:98:e9:d6

[root@hp-gen9 ~]# virsh domiflist --domain compute-2
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    ovs-network-1 virtio      52:54:00:6a:ea:be

[root@hp-gen9 ~]# virsh domiflist --domain storage-1
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    ovs-network-1 virtio      52:54:00:79:0b:cb

[root@hp-gen9 ~]# virsh domiflist --domain storage-2
Interface  Type       Source     Model       MAC
-------------------------------------------------------
-          network    ovs-network-1 virtio      52:54:00:a7:fe:27

Note: the control node has two interfaces, but in this case it does not matter — for this installation one will be enough for us.

Now let's prepare the json file. We need to specify the mac address of the port through which provisioning will be performed, the parameters of the nodes, give them names and indicate how to reach ipmi:


{
    "nodes":[
        {
            "mac":[
                "52:54:00:20:a2:2f"
            ],
            "cpu":"8",
            "memory":"32768",
            "disk":"60",
            "arch":"x86_64",
            "name":"control-1",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"192.168.255.200",
            "pm_port":"7001"
        },
        {
            "mac":[
                "52:54:00:79:0b:cb"
            ],
            "cpu":"4",
            "memory":"16384",
            "disk":"160",
            "arch":"x86_64",
            "name":"storage-1",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"192.168.255.200",
            "pm_port":"7002"
        },
        {
            "mac":[
                "52:54:00:a7:fe:27"
            ],
            "cpu":"4",
            "memory":"16384",
            "disk":"160",
            "arch":"x86_64",
            "name":"storage-2",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"192.168.255.200",
            "pm_port":"7003"
        },
        {
            "mac":[
                "52:54:00:98:e9:d6"
            ],
            "cpu":"12",
            "memory":"32768",
            "disk":"60",
            "arch":"x86_64",
            "name":"compute-1",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"192.168.255.200",
            "pm_port":"7004"
        },
        {
            "mac":[
                "52:54:00:6a:ea:be"
            ],
            "cpu":"12",
            "memory":"32768",
            "disk":"60",
            "arch":"x86_64",
            "name":"compute-2",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_password":"admin",
            "pm_addr":"192.168.255.200",
            "pm_port":"7005"
        }
    ]
}

Now we need to prepare the images for ironic. To do this, download them via wget and unpack them:

(undercloud) [stack@undercloud ~]$ sudo wget https://images.rdoproject.org/queens/delorean/current-tripleo-rdo/overcloud-full.tar --no-check-certificate
(undercloud) [stack@undercloud ~]$ sudo wget https://images.rdoproject.org/queens/delorean/current-tripleo-rdo/ironic-python-agent.tar --no-check-certificate
(undercloud) [stack@undercloud ~]$ ls -lh
total 1.9G
-rw-r--r--. 1 stack stack 447M Aug 14 10:26 ironic-python-agent.tar
-rw-r--r--. 1 stack stack 1.5G Aug 14 10:26 overcloud-full.tar
-rw-------. 1 stack stack  916 Aug 13 23:10 stackrc
-rw-r--r--. 1 stack stack  15K Aug 13 22:50 undercloud.conf
-rw-------. 1 stack stack 2.0K Aug 13 22:50 undercloud-passwords.conf
(undercloud) [stack@undercloud ~]$ mkdir images/
(undercloud) [stack@undercloud ~]$ tar -xpvf ironic-python-agent.tar -C ~/images/
ironic-python-agent.initramfs
ironic-python-agent.kernel
(undercloud) [stack@undercloud ~]$ tar -xpvf overcloud-full.tar -C ~/images/                       
overcloud-full.qcow2
overcloud-full.initrd
overcloud-full.vmlinuz
(undercloud) [stack@undercloud ~]$ 
(undercloud) [stack@undercloud ~]$ ls -lh images/
total 1.9G
-rw-rw-r--. 1 stack stack 441M Aug 12 17:24 ironic-python-agent.initramfs
-rwxr-xr-x. 1 stack stack 6.5M Aug 12 17:24 ironic-python-agent.kernel
-rw-r--r--. 1 stack stack  53M Aug 12 17:14 overcloud-full.initrd
-rw-r--r--. 1 stack stack 1.4G Aug 12 17:18 overcloud-full.qcow2
-rwxr-xr-x. 1 stack stack 6.5M Aug 12 17:14 overcloud-full.vmlinuz
(undercloud) [stack@undercloud ~]$

Upload the images to the undercloud:

(undercloud) [stack@undercloud ~]$ openstack overcloud image upload --image-path ~/images/
Image "overcloud-full-vmlinuz" was uploaded.
+--------------------------------------+------------------------+-------------+---------+--------+
|                  ID                  |          Name          | Disk Format |   Size  | Status |
+--------------------------------------+------------------------+-------------+---------+--------+
| c2553770-3e0f-4750-b46b-138855b5c385 | overcloud-full-vmlinuz |     aki     | 6761064 | active |
+--------------------------------------+------------------------+-------------+---------+--------+
Image "overcloud-full-initrd" was uploaded.
+--------------------------------------+-----------------------+-------------+----------+--------+
|                  ID                  |          Name         | Disk Format |   Size   | Status |
+--------------------------------------+-----------------------+-------------+----------+--------+
| 949984e0-4932-4e71-af43-d67a38c3dc89 | overcloud-full-initrd |     ari     | 55183045 | active |
+--------------------------------------+-----------------------+-------------+----------+--------+
Image "overcloud-full" was uploaded.
+--------------------------------------+----------------+-------------+------------+--------+
|                  ID                  |      Name      | Disk Format |    Size    | Status |
+--------------------------------------+----------------+-------------+------------+--------+
| a2f2096d-c9d7-429a-b866-c7543c02a380 | overcloud-full |    qcow2    | 1487475712 | active |
+--------------------------------------+----------------+-------------+------------+--------+
Image "bm-deploy-kernel" was uploaded.
+--------------------------------------+------------------+-------------+---------+--------+
|                  ID                  |       Name       | Disk Format |   Size  | Status |
+--------------------------------------+------------------+-------------+---------+--------+
| e413aa78-e38f-404c-bbaf-93e582a8e67f | bm-deploy-kernel |     aki     | 6761064 | active |
+--------------------------------------+------------------+-------------+---------+--------+
Image "bm-deploy-ramdisk" was uploaded.
+--------------------------------------+-------------------+-------------+-----------+--------+
|                  ID                  |        Name       | Disk Format |    Size   | Status |
+--------------------------------------+-------------------+-------------+-----------+--------+
| 5cf3aba4-0e50-45d3-929f-27f025dd6ce3 | bm-deploy-ramdisk |     ari     | 461759376 | active |
+--------------------------------------+-------------------+-------------+-----------+--------+
(undercloud) [stack@undercloud ~]$

Check that all the images have been loaded:


(undercloud) [stack@undercloud ~]$  openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| e413aa78-e38f-404c-bbaf-93e582a8e67f | bm-deploy-kernel       | active |
| 5cf3aba4-0e50-45d3-929f-27f025dd6ce3 | bm-deploy-ramdisk      | active |
| a2f2096d-c9d7-429a-b866-c7543c02a380 | overcloud-full         | active |
| 949984e0-4932-4e71-af43-d67a38c3dc89 | overcloud-full-initrd  | active |
| c2553770-3e0f-4750-b46b-138855b5c385 | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+
(undercloud) [stack@undercloud ~]$

One more thing — you need to add a DNS server:


(undercloud) [stack@undercloud ~]$ openstack subnet list
+--------------------------------------+-----------------+--------------------------------------+------------------+
| ID                                   | Name            | Network                              | Subnet           |
+--------------------------------------+-----------------+--------------------------------------+------------------+
| f45dea46-4066-42aa-a3c4-6f84b8120cab | ctlplane-subnet | 6ca013dc-41c2-42d8-9d69-542afad53392 | 192.168.255.0/24 |
+--------------------------------------+-----------------+--------------------------------------+------------------+
(undercloud) [stack@undercloud ~]$ openstack subnet show f45dea46-4066-42aa-a3c4-6f84b8120cab
+-------------------+-----------------------------------------------------------+
| Field             | Value                                                     |
+-------------------+-----------------------------------------------------------+
| allocation_pools  | 192.168.255.11-192.168.255.50                             |
| cidr              | 192.168.255.0/24                                          |
| created_at        | 2020-08-13T20:10:37Z                                      |
| description       |                                                           |
| dns_nameservers   |                                                           |
| enable_dhcp       | True                                                      |
| gateway_ip        | 192.168.255.1                                             |
| host_routes       | destination='169.254.169.254/32', gateway='192.168.255.1' |
| id                | f45dea46-4066-42aa-a3c4-6f84b8120cab                      |
| ip_version        | 4                                                         |
| ipv6_address_mode | None                                                      |
| ipv6_ra_mode      | None                                                      |
| name              | ctlplane-subnet                                           |
| network_id        | 6ca013dc-41c2-42d8-9d69-542afad53392                      |
| prefix_length     | None                                                      |
| project_id        | a844ccfcdb2745b198dde3e1b28c40a3                          |
| revision_number   | 0                                                         |
| segment_id        | None                                                      |
| service_types     |                                                           |
| subnetpool_id     | None                                                      |
| tags              |                                                           |
| updated_at        | 2020-08-13T20:10:37Z                                      |
+-------------------+-----------------------------------------------------------+
(undercloud) [stack@undercloud ~]$ 
(undercloud) [stack@undercloud ~]$ neutron subnet-update f45dea46-4066-42aa-a3c4-6f84b8120cab --dns-nameserver 192.168.255.253                                    
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Updated subnet: f45dea46-4066-42aa-a3c4-6f84b8120cab
(undercloud) [stack@undercloud ~]$
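Since the neutron client itself warns that it is deprecated, the same change can be made with the openstack client; as far as I know the equivalent command is:

openstack subnet set --dns-nameserver 192.168.255.253 f45dea46-4066-42aa-a3c4-6f84b8120cab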

Now we can give the command for introspection:

(undercloud) [stack@undercloud ~]$ openstack overcloud node import --introspect --provide inspection.json 
Started Mistral Workflow tripleo.baremetal.v1.register_or_update. Execution ID: d57456a3-d8ed-479c-9a90-dff7c752d0ec
Waiting for messages on queue 'tripleo' with no timeout.


5 node(s) successfully moved to the "manageable" state.
Successfully registered node UUID b4b2cf4a-b7ca-4095-af13-cc83be21c4f5
Successfully registered node UUID b89a72a3-6bb7-429a-93bc-48393d225838
Successfully registered node UUID 20a16cc0-e0ce-4d88-8f17-eb0ce7b4d69e
Successfully registered node UUID bfc1eb98-a17a-4a70-b0b6-6c0db0eac8e8
Successfully registered node UUID 766ab623-464c-423d-a529-d9afb69d1167
Waiting for introspection to finish...
Started Mistral Workflow tripleo.baremetal.v1.introspect. Execution ID: 6b4d08ae-94c3-4a10-ab63-7634ec198a79
Waiting for messages on queue 'tripleo' with no timeout.
Introspection of node b89a72a3-6bb7-429a-93bc-48393d225838 completed. Status:SUCCESS. Errors:None
Introspection of node 20a16cc0-e0ce-4d88-8f17-eb0ce7b4d69e completed. Status:SUCCESS. Errors:None
Introspection of node bfc1eb98-a17a-4a70-b0b6-6c0db0eac8e8 completed. Status:SUCCESS. Errors:None
Introspection of node 766ab623-464c-423d-a529-d9afb69d1167 completed. Status:SUCCESS. Errors:None
Introspection of node b4b2cf4a-b7ca-4095-af13-cc83be21c4f5 completed. Status:SUCCESS. Errors:None
Successfully introspected 5 node(s).
Started Mistral Workflow tripleo.baremetal.v1.provide. Execution ID: f5594736-edcf-4927-a8a0-2a7bf806a59a
Waiting for messages on queue 'tripleo' with no timeout.
5 node(s) successfully moved to the "available" state.
(undercloud) [stack@undercloud ~]$

As you can see from the output, everything completed without errors. Let's check that all the nodes are in the available state:


(undercloud) [stack@undercloud ~]$ openstack baremetal node list
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name      | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
| b4b2cf4a-b7ca-4095-af13-cc83be21c4f5 | control-1 | None          | power off   | available          | False       |
| b89a72a3-6bb7-429a-93bc-48393d225838 | storage-1 | None          | power off   | available          | False       |
| 20a16cc0-e0ce-4d88-8f17-eb0ce7b4d69e | storage-2 | None          | power off   | available          | False       |
| bfc1eb98-a17a-4a70-b0b6-6c0db0eac8e8 | compute-1 | None          | power off   | available          | False       |
| 766ab623-464c-423d-a529-d9afb69d1167 | compute-2 | None          | power off   | available          | False       |
+--------------------------------------+-----------+---------------+-------------+--------------------+-------------+
(undercloud) [stack@undercloud ~]$ 

If the nodes end up in a different state, usually manageable, then something went wrong and you need to look at the logs and figure out why it happened. Keep in mind that in this scenario we are using virtualization, and there may be bugs related to the use of virtual machines or vbmc.
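For reference, a minimal way to investigate a stuck node (the exact log locations depend on the release) is to look at its last_error field and, once the cause is fixed, push it through the states again:

openstack baremetal node show b4b2cf4a-b7ca-4095-af13-cc83be21c4f5 -f value -c provision_state -c last_error
openstack baremetal node manage b4b2cf4a-b7ca-4095-af13-cc83be21c4f5
openstack baremetal node provide b4b2cf4a-b7ca-4095-af13-cc83be21c4f5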

Next we need to indicate which node will perform which role, that is, specify the profile with which the node will be deployed:


(undercloud) [stack@undercloud ~]$ openstack overcloud profiles list
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| b4b2cf4a-b7ca-4095-af13-cc83be21c4f5 | control-1 | available       | None            |                   |
| b89a72a3-6bb7-429a-93bc-48393d225838 | storage-1 | available       | None            |                   |
| 20a16cc0-e0ce-4d88-8f17-eb0ce7b4d69e | storage-2 | available       | None            |                   |
| bfc1eb98-a17a-4a70-b0b6-6c0db0eac8e8 | compute-1 | available       | None            |                   |
| 766ab623-464c-423d-a529-d9afb69d1167 | compute-2 | available       | None            |                   |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
(undercloud) [stack@undercloud ~]$ openstack flavor list
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| ID                                   | Name          |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| 168af640-7f40-42c7-91b2-989abc5c5d8f | swift-storage | 4096 |   40 |         0 |     1 | True      |
| 52148d1b-492e-48b4-b5fc-772849dd1b78 | baremetal     | 4096 |   40 |         0 |     1 | True      |
| 56e66542-ae60-416d-863e-0cb192d01b09 | control       | 4096 |   40 |         0 |     1 | True      |
| af6796e1-d0c4-4bfe-898c-532be194f7ac | block-storage | 4096 |   40 |         0 |     1 | True      |
| e4d50fdd-0034-446b-b72c-9da19b16c2df | compute       | 4096 |   40 |         0 |     1 | True      |
| fc2e3acf-7fca-4901-9eee-4a4d6ef0265d | ceph-storage  | 4096 |   40 |         0 |     1 | True      |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
(undercloud) [stack@undercloud ~]$

Specify a profile for each node:


openstack baremetal node set --property capabilities='profile:control,boot_option:local' b4b2cf4a-b7ca-4095-af13-cc83be21c4f5
openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' b89a72a3-6bb7-429a-93bc-48393d225838
openstack baremetal node set --property capabilities='profile:ceph-storage,boot_option:local' 20a16cc0-e0ce-4d88-8f17-eb0ce7b4d69e
openstack baremetal node set --property capabilities='profile:compute,boot_option:local' bfc1eb98-a17a-4a70-b0b6-6c0db0eac8e8
openstack baremetal node set --property capabilities='profile:compute,boot_option:local' 766ab623-464c-423d-a529-d9afb69d1167

Let's check that we did everything correctly:


(undercloud) [stack@undercloud ~]$ openstack overcloud profiles list
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| b4b2cf4a-b7ca-4095-af13-cc83be21c4f5 | control-1 | available       | control         |                   |
| b89a72a3-6bb7-429a-93bc-48393d225838 | storage-1 | available       | ceph-storage    |                   |
| 20a16cc0-e0ce-4d88-8f17-eb0ce7b4d69e | storage-2 | available       | ceph-storage    |                   |
| bfc1eb98-a17a-4a70-b0b6-6c0db0eac8e8 | compute-1 | available       | compute         |                   |
| 766ab623-464c-423d-a529-d9afb69d1167 | compute-2 | available       | compute         |                   |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
(undercloud) [stack@undercloud ~]$

If everything is correct, we give the command to deploy the overcloud:

openstack overcloud deploy --templates --control-scale 1 --compute-scale 2  --ceph-storage-scale 2 --control-flavor control --compute-flavor compute  --ceph-storage-flavor ceph-storage --libvirt-type qemu

In a real installation you would naturally use customized templates, but in our case this would greatly complicate the process, since every change in the template would have to be explained. As was written earlier, even a simple installation will be enough for us to see how it all works.

Note: the --libvirt-type qemu variable is required in this case, since we are using nested virtualization. Otherwise you will not be able to run virtual machines.
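For reference, a customized deployment usually just adds environment files with -e on top of the same command; a hypothetical sketch (the second file is one you would write yourself):

openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /home/stack/templates/network-environment.yaml \
  --control-scale 1 --compute-scale 2 --ceph-storage-scale 2 \
  --control-flavor control --compute-flavor compute --ceph-storage-flavor ceph-storage \
  --libvirt-type qemu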

Now you have about an hour, or maybe more (depending on the capabilities of your hardware), and you can only hope that after this time you will see the following message:


2020-08-14 08:39:21Z [overcloud]: CREATE_COMPLETE  Stack CREATE completed successfully

 Stack overcloud CREATE_COMPLETE 

Host 192.168.255.21 not found in /home/stack/.ssh/known_hosts
Started Mistral Workflow tripleo.deployment.v1.get_horizon_url. Execution ID: fcb996cd-6a19-482b-b755-2ca0c08069a9
Overcloud Endpoint: http://192.168.255.21:5000/
Overcloud Horizon Dashboard URL: http://192.168.255.21:80/dashboard
Overcloud rc file: /home/stack/overcloudrc
Overcloud Deployed
(undercloud) [stack@undercloud ~]$

Now you have an almost full-fledged version of openstack on which you can study, experiment and so on.

Let's check that everything works correctly. In the user's home directory there are two files — stackrc (for managing the undercloud) and overcloudrc (for managing the overcloud). These files must be sourced, since they contain the information needed for authentication.


(undercloud) [stack@undercloud ~]$ openstack server list
+--------------------------------------+-------------------------+--------+-------------------------+----------------+--------------+
| ID                                   | Name                    | Status | Networks                | Image          | Flavor       |
+--------------------------------------+-------------------------+--------+-------------------------+----------------+--------------+
| fd7d36f4-ce87-4b9a-93b0-add2957792de | overcloud-controller-0  | ACTIVE | ctlplane=192.168.255.15 | overcloud-full | control      |
| edc77778-8972-475e-a541-ff40eb944197 | overcloud-novacompute-1 | ACTIVE | ctlplane=192.168.255.26 | overcloud-full | compute      |
| 5448ce01-f05f-47ca-950a-ced14892c0d4 | overcloud-cephstorage-1 | ACTIVE | ctlplane=192.168.255.34 | overcloud-full | ceph-storage |
| ce6d862f-4bdf-4ba3-b711-7217915364d7 | overcloud-novacompute-0 | ACTIVE | ctlplane=192.168.255.19 | overcloud-full | compute      |
| e4507bd5-6f96-4b12-9cc0-6924709da59e | overcloud-cephstorage-0 | ACTIVE | ctlplane=192.168.255.44 | overcloud-full | ceph-storage |
+--------------------------------------+-------------------------+--------+-------------------------+----------------+--------------+
(undercloud) [stack@undercloud ~]$ 


(undercloud) [stack@undercloud ~]$ source overcloudrc 
(overcloud) [stack@undercloud ~]$ 
(overcloud) [stack@undercloud ~]$ openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 4eed7d0f06544625857d51cd77c5bd4c | admin   |
| ee1c68758bde41eaa9912c81dc67dad8 | service |
+----------------------------------+---------+
(overcloud) [stack@undercloud ~]$ 
(overcloud) [stack@undercloud ~]$ 
(overcloud) [stack@undercloud ~]$ openstack network agent list  
+--------------------------------------+--------------------+-------------------------------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host                                | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-------------------------------------+-------------------+-------+-------+---------------------------+
| 10495de9-ba4b-41fe-b30a-b90ec3f8728b | Open vSwitch agent | overcloud-novacompute-1.localdomain | None              | :-)   | UP    | neutron-openvswitch-agent |
| 1515ad4a-5972-46c3-af5f-e5446dff7ac7 | L3 agent           | overcloud-controller-0.localdomain  | nova              | :-)   | UP    | neutron-l3-agent          |
| 322e62ca-1e5a-479e-9a96-4f26d09abdd7 | DHCP agent         | overcloud-controller-0.localdomain  | nova              | :-)   | UP    | neutron-dhcp-agent        |
| 9c1de2f9-bac5-400e-998d-4360f04fc533 | Open vSwitch agent | overcloud-novacompute-0.localdomain | None              | :-)   | UP    | neutron-openvswitch-agent |
| d99c5657-851e-4d3c-bef6-f1e3bb1acfb0 | Open vSwitch agent | overcloud-controller-0.localdomain  | None              | :-)   | UP    | neutron-openvswitch-agent |
| ff85fae6-5543-45fb-a301-19c57b62d836 | Metadata agent     | overcloud-controller-0.localdomain  | None              | :-)   | UP    | neutron-metadata-agent    |
+--------------------------------------+--------------------+-------------------------------------+-------------------+-------+-------+---------------------------+
(overcloud) [stack@undercloud ~]$

My installation still needs one small touch — adding a route on the controller, since the machine I am working from is on a different network. To do this, go to control-1 under the heat-admin account and add the route:


(undercloud) [stack@undercloud ~]$ ssh heat-admin@192.168.255.15
Last login: Fri Aug 14 09:47:40 2020 from 192.168.255.1
[heat-admin@overcloud-controller-0 ~]$ 
[heat-admin@overcloud-controller-0 ~]$ 
[heat-admin@overcloud-controller-0 ~]$ sudo ip route add 10.169.0.0/16 via 192.168.255.254

Well, now you can go to Horizon. All the information — addresses, login and password — is in the file /home/stack/overcloudrc. The final picture looks like this:

[diagram]

By the way, in our installation the machine addresses were issued via DHCP and, as you can see, they come out "at random". In the template you can strictly define which address should be attached to which machine during deployment, if you need to.

How does traffic flow between virtual machines?

In this article we will look at three options for traffic flow:

  • Two machines on the same hypervisor, on the same L2 network
  • Two machines on different hypervisors, on the same L2 network
  • Two machines on different networks (cross-network routing)

Access to the outside world through an external network using floating addresses, as well as distributed routing, we will look at next time; for now we will focus on internal traffic.

To test this, let's put together the following setup:

[diagram]

We have created 4 virtual machines — 3 on one L2 network, net-1, and 1 more on network net-2:

(overcloud) [stack@undercloud ~]$ nova list --tenant 5e18ce8ec9594e00b155485f19895e6c             
+--------------------------------------+------+----------------------------------+--------+------------+-------------+-----------------+
| ID                                   | Name | Tenant ID                        | Status | Task State | Power State | Networks        |
+--------------------------------------+------+----------------------------------+--------+------------+-------------+-----------------+
| f53b37b5-2204-46cc-aef0-dba84bf970c0 | vm-1 | 5e18ce8ec9594e00b155485f19895e6c | ACTIVE | -          | Running     | net-1=10.0.1.85 |
| fc8b6722-0231-49b0-b2fa-041115bef34a | vm-2 | 5e18ce8ec9594e00b155485f19895e6c | ACTIVE | -          | Running     | net-1=10.0.1.88 |
| 3cd74455-b9b7-467a-abe3-bd6ff765c83c | vm-3 | 5e18ce8ec9594e00b155485f19895e6c | ACTIVE | -          | Running     | net-1=10.0.1.90 |
| 7e836338-6772-46b0-9950-f7f06dbe91a8 | vm-4 | 5e18ce8ec9594e00b155485f19895e6c | ACTIVE | -          | Running     | net-2=10.0.2.8  |
+--------------------------------------+------+----------------------------------+--------+------------+-------------+-----------------+
(overcloud) [stack@undercloud ~]$ 

Let's see on which hypervisors the created machines are located:

(overcloud) [stack@undercloud ~]$ nova show f53b37b5-2204-46cc-aef0-dba84bf970c0 | egrep "hypervisor_hostname|instance_name|hostname"
| OS-EXT-SRV-ATTR:hostname             | vm-1                                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | overcloud-novacompute-0.localdomain                      |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000001                                        |
(overcloud) [stack@undercloud ~]$ nova show fc8b6722-0231-49b0-b2fa-041115bef34a | egrep "hypervisor_hostname|instance_name|hostname"
| OS-EXT-SRV-ATTR:hostname             | vm-2                                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | overcloud-novacompute-1.localdomain                      |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000002                                        |
(overcloud) [stack@undercloud ~]$ nova show 3cd74455-b9b7-467a-abe3-bd6ff765c83c | egrep "hypervisor_hostname|instance_name|hostname"
| OS-EXT-SRV-ATTR:hostname             | vm-3                                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | overcloud-novacompute-0.localdomain                      |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000003                                        |
(overcloud) [stack@undercloud ~]$ nova show 7e836338-6772-46b0-9950-f7f06dbe91a8 | egrep "hypervisor_hostname|instance_name|hostname"
| OS-EXT-SRV-ATTR:hostname             | vm-4                                                     |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | overcloud-novacompute-1.localdomain                      |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000004                                        |

(overcloud) [stack@undercloud~]$

Machines vm-1 and vm-3 are hosted on compute-0, machines vm-2 and vm-4 on compute-1.

In addition, a virtual router has been created to route between the networks:

(overcloud) [stack@undercloud ~]$ openstack router list  --project 5e18ce8ec9594e00b155485f19895e6c
+--------------------------------------+----------+--------+-------+-------------+-------+----------------------------------+
| ID                                   | Name     | Status | State | Distributed | HA    | Project                          |
+--------------------------------------+----------+--------+-------+-------------+-------+----------------------------------+
| 0a4d2420-4b9c-46bd-aec1-86a1ef299abe | router-1 | ACTIVE | UP    | False       | False | 5e18ce8ec9594e00b155485f19895e6c |
+--------------------------------------+----------+--------+-------+-------------+-------+----------------------------------+
(overcloud) [stack@undercloud ~]$ 

The virtual router has two ports, which act as gateways for the networks:

(overcloud) [stack@undercloud ~]$ openstack router show 0a4d2420-4b9c-46bd-aec1-86a1ef299abe | grep interface
| interfaces_info         | [{"subnet_id": "2529ad1a-6b97-49cd-8515-cbdcbe5e3daa", "ip_address": "10.0.1.254", "port_id": "0c52b15f-8fcc-4801-bf52-7dacc72a5201"}, {"subnet_id": "335552dd-b35b-456b-9df0-5aac36a3ca13", "ip_address": "10.0.2.254", "port_id": "92fa49b5-5406-499f-ab8d-ddf28cc1a76c"}] |
(overcloud) [stack@undercloud ~]$ 

But before we look at how the traffic flows, let's look at what we currently have on the control node (which is also the network node) and on the compute node. Let's start with the compute node:


[heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-vsctl show
[heat-admin@overcloud-novacompute-0 ~]$ sudo sudo ovs-appctl dpif/show
system@ovs-system: hit:3 missed:3
  br-ex:
    br-ex 65534/1: (internal)
    phy-br-ex 1/none: (patch: peer=int-br-ex)
  br-int:
    br-int 65534/2: (internal)
    int-br-ex 1/none: (patch: peer=phy-br-ex)
    patch-tun 2/none: (patch: peer=patch-int)
  br-tun:
    br-tun 65534/3: (internal)
    patch-int 1/none: (patch: peer=patch-tun)
    vxlan-c0a8ff0f 3/4: (vxlan: egress_pkt_mark=0, key=flow, local_ip=192.168.255.19, remote_ip=192.168.255.15)
    vxlan-c0a8ff1a 2/4: (vxlan: egress_pkt_mark=0, key=flow, local_ip=192.168.255.19, remote_ip=192.168.255.26)
[heat-admin@overcloud-novacompute-0 ~]$

At the moment the node has three ovs bridges — br-int, br-tun, br-ex. Between them, as we can see, there is a set of interfaces. For ease of understanding, let's put all these interfaces on a diagram and see what we get.

[diagram]

Looking at the addresses the VxLAN tunnels are raised to, you can see that one tunnel goes to compute-1 (192.168.255.26) and the second one looks at control-1 (192.168.255.15). But the most interesting thing is that br-ex has no physical interfaces, and if you look at which flows are configured, you can see that at the moment this bridge can only drop traffic.


[heat-admin@overcloud-novacompute-0 ~]$ ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 192.168.255.19  netmask 255.255.255.0  broadcast 192.168.255.255
        inet6 fe80::5054:ff:fe6a:eabe  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:6a:ea:be  txqueuelen 1000  (Ethernet)
        RX packets 2909669  bytes 4608201000 (4.2 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1821057  bytes 349198520 (333.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[heat-admin@overcloud-novacompute-0 ~]$ 

As you can see from the output, the address sits directly on the physical port, and not on the virtual bridge interface.


[heat-admin@overcloud-novacompute-0 ~]$  sudo ovs-appctl fdb/show br-ex
 port  VLAN  MAC                Age
[heat-admin@overcloud-novacompute-0 ~]$  sudo ovs-ofctl dump-flows br-ex
 cookie=0x9169eae8f7fe5bb2, duration=216686.864s, table=0, n_packets=303, n_bytes=26035, priority=2,in_port="phy-br-ex" actions=drop
 cookie=0x9169eae8f7fe5bb2, duration=216686.887s, table=0, n_packets=0, n_bytes=0, priority=0 actions=NORMAL
[heat-admin@overcloud-novacompute-0 ~]$ 

According to the first rule, everything that arrives from the phy-br-ex port must be dropped.
In fact, there is currently nowhere else for traffic to enter this bridge except from this interface (the patch with br-int), and judging by the drops, BUM traffic has already arrived at the bridge.

That is, for now traffic can leave this node only through the VxLAN tunnel and nothing else. However, if you enable DVR the picture will change, but we will deal with that another time. When using network isolation, for example with vlans, you will have not one L3 interface in vlan 0, but several interfaces. Nevertheless, VxLAN traffic will leave the node in the same way, only also encapsulated in a dedicated vlan.

We have sorted out the compute node — let's move on to the control node.


[heat-admin@overcloud-controller-0 ~]$ sudo ovs-appctl dpif/show
system@ovs-system: hit:930491 missed:825
  br-ex:
    br-ex 65534/1: (internal)
    eth0 1/2: (system)
    phy-br-ex 2/none: (patch: peer=int-br-ex)
  br-int:
    br-int 65534/3: (internal)
    int-br-ex 1/none: (patch: peer=phy-br-ex)
    patch-tun 2/none: (patch: peer=patch-int)
  br-tun:
    br-tun 65534/4: (internal)
    patch-int 1/none: (patch: peer=patch-tun)
    vxlan-c0a8ff13 3/5: (vxlan: egress_pkt_mark=0, key=flow, local_ip=192.168.255.15, remote_ip=192.168.255.19)
    vxlan-c0a8ff1a 2/5: (vxlan: egress_pkt_mark=0, key=flow, local_ip=192.168.255.15, remote_ip=192.168.255.26)
[heat-admin@overcloud-controller-0 ~]$

In fact, we can say that everything is the same, except that the IP address no longer sits on the physical interface but on the virtual bridge. This is done because this port is the port through which traffic will exit to the outside world.


[heat-admin@overcloud-controller-0 ~]$ ifconfig br-ex
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 192.168.255.15  netmask 255.255.255.0  broadcast 192.168.255.255
        inet6 fe80::5054:ff:fe20:a22f  prefixlen 64  scopeid 0x20<link>
        ether 52:54:00:20:a2:2f  txqueuelen 1000  (Ethernet)
        RX packets 803859  bytes 1732616116 (1.6 GiB)
        RX errors 0  dropped 63  overruns 0  frame 0
        TX packets 808475  bytes 121652156 (116.0 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[heat-admin@overcloud-controller-0 ~]$ 
[heat-admin@overcloud-controller-0 ~]$ sudo ovs-appctl fdb/show br-ex
 port  VLAN  MAC                Age
    3   100  28:c0:da:00:4d:d3   35
    1     0  28:c0:da:00:4d:d3   35
    1     0  52:54:00:98:e9:d6    0
LOCAL     0  52:54:00:20:a2:2f    0
    1     0  52:54:00:2c:08:9e    0
    3   100  52:54:00:20:a2:2f    0
    1     0  52:54:00:6a:ea:be    0
[heat-admin@overcloud-controller-0 ~]$ 

The physical port is attached to the br-ex bridge and, since it carries no vlan tags, it is a trunk port on which all vlans are allowed; for now traffic leaves without a tag, as indicated by vlan-id 0 in the output above.

[diagram]

Everything else at the moment is similar to the compute node — the same bridges, only with tunnels going to the two compute nodes.

We will not look at storage nodes in this article, but for understanding it has to be said that the network part of these nodes is utterly trivial. In our case there is just one physical port (eth0) with an IP address assigned to it, and that's it. There are no VxLAN tunnels, no tunnel bridges and so on — there is no ovs at all, since there is no point in it. When using network isolation, such a node will have two interfaces (physical ports, bonds, or just two vlans — it doesn't matter, it depends on what you want) — one for management, the second for traffic (writing to the VM disk, reading from the disk, etc.).

We have figured out what the nodes look like in the absence of any services. Now we have brought up 4 virtual machines, so let's see how the scheme described above changes — ports, virtual routers and so on should appear.

Now our network looks like this:

[diagram]

We have two virtual machines on each compute node. Using compute-0 as an example, let's see how everything is plugged in.


[heat-admin@overcloud-novacompute-0 ~]$ sudo virsh list 
 Id    Name                           State
----------------------------------------------------
 1     instance-00000001              running
 3     instance-00000003              running

[heat-admin@overcloud-novacompute-0 ~]$ 

The machine has only one virtual interface — tap95d96a75-a0:

[heat-admin@overcloud-novacompute-0 ~]$ sudo virsh domiflist instance-00000001
Interface  Type       Source     Model       MAC
-------------------------------------------------------
tap95d96a75-a0 bridge     qbr95d96a75-a0 virtio      fa:16:3e:44:98:20

[heat-admin@overcloud-novacompute-0 ~]$ 

This interface is plugged into a Linux bridge:

[heat-admin@overcloud-novacompute-0 ~]$ sudo brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.0242904c92a8       no
qbr5bd37136-47          8000.5e4e05841423       no              qvb5bd37136-47
                                                        tap5bd37136-47
qbr95d96a75-a0          8000.de076cb850f6       no              qvb95d96a75-a0
                                                        tap95d96a75-a0
[heat-admin@overcloud-novacompute-0 ~]$ 

As you can see from the output, there are only two interfaces in the bridge — tap95d96a75-a0 and qvb95d96a75-a0.

Here it is worth dwelling a little on the types of virtual network devices in OpenStack:
vtap — a virtual interface attached to an instance (VM)
qbr — a Linux bridge
qvb and qvo — a vEth pair connecting the Linux bridge and the Open vSwitch bridge
br-int, br-tun, br-vlan — Open vSwitch bridges
patch-, int-br-, phy-br- — Open vSwitch patch interfaces connecting the bridges
qg, qr, ha, fg, sg — Open vSwitch ports used by virtual devices to connect to OVS
As you understand, if the bridge contains a port qvb95d96a75-a0, which is one end of a vEth pair, then somewhere there is its counterpart, which should logically be called qvo95d96a75-a0. Let's see which ports are on the OVS.


[heat-admin@overcloud-novacompute-0 ~]$ sudo sudo ovs-appctl dpif/show
system@ovs-system: hit:526 missed:91
  br-ex:
    br-ex 65534/1: (internal)
    phy-br-ex 1/none: (patch: peer=int-br-ex)
  br-int:
    br-int 65534/2: (internal)
    int-br-ex 1/none: (patch: peer=phy-br-ex)
    patch-tun 2/none: (patch: peer=patch-int)
    qvo5bd37136-47 6/6: (system)
    qvo95d96a75-a0 3/5: (system)
  br-tun:
    br-tun 65534/3: (internal)
    patch-int 1/none: (patch: peer=patch-tun)
    vxlan-c0a8ff0f 3/4: (vxlan: egress_pkt_mark=0, key=flow, local_ip=192.168.255.19, remote_ip=192.168.255.15)
    vxlan-c0a8ff1a 2/4: (vxlan: egress_pkt_mark=0, key=flow, local_ip=192.168.255.19, remote_ip=192.168.255.26)
[heat-admin@overcloud-novacompute-0 ~]$ 

As we can see, the port is in br-int. Br-int acts as a switch that terminates virtual machine ports. Besides qvo95d96a75-a0, the port qvo5bd37136-47 is visible in the output. This is the port of the second virtual machine. As a result, our diagram now looks like this:

[diagram]

A question that should immediately occur to an attentive reader — why is there a Linux bridge between the virtual machine port and the OVS port? The point is that security groups are used to protect the machine, and they are nothing more than iptables. OVS does not work with iptables, so this "crutch" was invented. However, it is living out its days — it is being replaced by conntrack in new releases.
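You can actually see those iptables rules on the compute node by grepping for the port suffix (the exact chain naming depends on the Neutron version, so treat this as a sketch):

sudo iptables -S | grep 95d96a75-a0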

So, ultimately, the scheme looks like this:

[diagram]

Two machines on the same hypervisor, on the same L2 network

Since these two VMs are on the same L2 network and on the same hypervisor, traffic between them will logically flow locally through br-int, since both machines will be in the same VLAN:


[heat-admin@overcloud-novacompute-0 ~]$ sudo virsh domiflist instance-00000001
Interface  Type       Source     Model       MAC
-------------------------------------------------------
tap95d96a75-a0 bridge     qbr95d96a75-a0 virtio      fa:16:3e:44:98:20

[heat-admin@overcloud-novacompute-0 ~]$ 
[heat-admin@overcloud-novacompute-0 ~]$ 
[heat-admin@overcloud-novacompute-0 ~]$ sudo virsh domiflist instance-00000003
Interface  Type       Source     Model       MAC
-------------------------------------------------------
tap5bd37136-47 bridge     qbr5bd37136-47 virtio      fa:16:3e:83:ad:a4

[heat-admin@overcloud-novacompute-0 ~]$ 
[heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-appctl fdb/show br-int 
 port  VLAN  MAC                Age
    6     1  fa:16:3e:83:ad:a4    0
    3     1  fa:16:3e:44:98:20    0
[heat-admin@overcloud-novacompute-0 ~]$ 

Two machines on different hypervisors, on the same L2 network

Now let's see how traffic goes between two machines on the same L2 network but located on different hypervisors. To be honest, nothing changes much; it's just that traffic between the hypervisors goes through the vxlan tunnel. Let's look at an example.

The addresses of the virtual machines whose traffic we will be watching:

[heat-admin@overcloud-novacompute-0 ~]$ sudo virsh domiflist instance-00000001
Interface  Type       Source     Model       MAC
-------------------------------------------------------
tap95d96a75-a0 bridge     qbr95d96a75-a0 virtio      fa:16:3e:44:98:20

[heat-admin@overcloud-novacompute-0 ~]$ 


[heat-admin@overcloud-novacompute-1 ~]$ sudo virsh domiflist instance-00000002
Interface  Type       Source     Model       MAC
-------------------------------------------------------
tape7e23f1b-07 bridge     qbre7e23f1b-07 virtio      fa:16:3e:72:ad:53

[heat-admin@overcloud-novacompute-1 ~]$ 

Let's look at the forwarding table in br-int on compute-0:

[heat-admin@overcloud-novacompute-0 ~]$  sudo ovs-appctl fdb/show br-int | grep fa:16:3e:72:ad:53
    2     1  fa:16:3e:72:ad:53    1
[heat-admin@overcloud-novacompute-0 ~]

The traffic should go to port 2 — let's see what kind of port it is:

[heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-ofctl show br-int | grep addr
 1(int-br-ex): addr:7e:7f:28:1f:bd:54
 2(patch-tun): addr:0a:bd:07:69:58:d9
 3(qvo95d96a75-a0): addr:ea:50:9a:3d:69:58
 6(qvo5bd37136-47): addr:9a:d1:03:50:3d:96
 LOCAL(br-int): addr:1a:0f:53:97:b1:49
[heat-admin@overcloud-novacompute-0 ~]$

This is patch-tun — that is, the interface towards br-tun. Let's see what happens to the packet in br-tun:

[heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-ofctl dump-flows br-tun | grep fa:16:3e:72:ad:53
 cookie=0x8759a56536b67a8e, duration=1387.959s, table=20, n_packets=1460, n_bytes=138880, hard_timeout=300, idle_age=0, hard_age=0, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:72:ad:53 actions=load:0->NXM_OF_VLAN_TCI[],load:0x16->NXM_NX_TUN_ID[],output:2
[heat-admin@overcloud-novacompute-0 ~]$ 

The packet is wrapped in VxLAN and sent to port 2. Let's see where port 2 leads:

[heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-ofctl show br-tun | grep addr   
 1(patch-int): addr:b2:d1:f8:21:96:66
 2(vxlan-c0a8ff1a): addr:be:64:1f:75:78:a7
 3(vxlan-c0a8ff0f): addr:76:6f:b9:3c:3f:1c
 LOCAL(br-tun): addr:a2:5b:6d:4f:94:47
[heat-admin@overcloud-novacompute-0 ~]$

This is the vxlan tunnel to compute-1:

[heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-appctl dpif/show | egrep vxlan-c0a8ff1a
    vxlan-c0a8ff1a 2/4: (vxlan: egress_pkt_mark=0, key=flow, local_ip=192.168.255.19, remote_ip=192.168.255.26)
[heat-admin@overcloud-novacompute-0 ~]$

Let's go to compute-1 and see what happens to the packet next:

[heat-admin@overcloud-novacompute-1 ~]$ sudo ovs-appctl fdb/show br-int | egrep fa:16:3e:44:98:20
    2     1  fa:16:3e:44:98:20    1
[heat-admin@overcloud-novacompute-1 ~]$ 

The source mac is present in the forwarding table on compute-1 and, as can be seen from the output above, it is reachable through port 2, which is the port towards br-tun:

[heat-admin@overcloud-novacompute-1 ~]$ sudo ovs-ofctl show br-int | grep addr   
 1(int-br-ex): addr:8a:d7:f9:ad:8c:1d
 2(patch-tun): addr:46:cc:40:bd:20:da
 3(qvoe7e23f1b-07): addr:12:78:2e:34:6a:c7
 4(qvo3210e8ec-c0): addr:7a:5f:59:75:40:85
 LOCAL(br-int): addr:e2:27:b2:ed:14:46

Good — now let's check that the destination mac is present on compute-1:

[heat-admin@overcloud-novacompute-1 ~]$ sudo ovs-appctl fdb/show br-int | egrep fa:16:3e:72:ad:53
    3     1  fa:16:3e:72:ad:53    0
[heat-admin@overcloud-novacompute-1 ~]$ 

That is, the received packet goes to port 3, behind which the virtual machine instance-00000002 sits.

The beauty of deploying OpenStack on a virtual infrastructure for study is that we can easily capture the traffic between the hypervisors and see what happens to it. This is what we will do now — run tcpdump on the vnet port facing compute-0:


[root@hp-gen9 bormoglotx]# tcpdump -vvv -i vnet3
tcpdump: listening on vnet3, link-type EN10MB (Ethernet), capture size 262144 bytes

*****************omitted*******************

04:39:04.583459 IP (tos 0x0, ttl 64, id 16868, offset 0, flags [DF], proto UDP (17), length 134)
    192.168.255.19.39096 > 192.168.255.26.4789: [no cksum] VXLAN, flags [I] (0x08), vni 22
IP (tos 0x0, ttl 64, id 8012, offset 0, flags [DF], proto ICMP (1), length 84)
    10.0.1.85 > 10.0.1.88: ICMP echo request, id 5634, seq 16, length 64
04:39:04.584449 IP (tos 0x0, ttl 64, id 35181, offset 0, flags [DF], proto UDP (17), length 134)
    192.168.255.26.speedtrace-disc > 192.168.255.19.4789: [no cksum] VXLAN, flags [I] (0x08), vni 22
IP (tos 0x0, ttl 64, id 59124, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.1.88 > 10.0.1.85: ICMP echo reply, id 5634, seq 16, length 64
	
*****************omitted*******************

The first line shows a packet from address 10.0.1.85 to address 10.0.1.88 (ICMP traffic), wrapped in a VxLAN packet with vni 22, with the outer packet going from host 192.168.255.19 (compute-0) to host 192.168.255.26 (compute-1). We can check that the vni matches the one specified in the ovs flows.

Let's go back to this line: actions=load:0->NXM_OF_VLAN_TCI[],load:0x16->NXM_NX_TUN_ID[],output:2. Here 0x16 is the vni in hexadecimal notation. Let's convert this number to decimal:


0x16 = 1*16^1 + 6*16^0 = 16 + 6 = 22

That is, the vni matches reality.
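You can double-check the conversion straight from the shell, assuming python is available on the host:

python -c 'print(int("0x16", 16))'
22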

The second line shows the return traffic — well, there is no point in analyzing it, everything is clear there.
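By the way, if you only want the encapsulated traffic in the capture, you can filter on the standard VXLAN UDP port right in tcpdump, something like:

tcpdump -nvv -i vnet3 'udp port 4789'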

Two machines on different networks (inter-network routing)

The last case for today is routing between networks within one project using a virtual router. We are considering the case without DVR (we will look at it in another article), so routing happens on the network node. In our setup the network node is not a separate entity and lives on the control node.

First, let's check that routing works:

$ ping 10.0.2.8
PING 10.0.2.8 (10.0.2.8): 56 data bytes
64 bytes from 10.0.2.8: seq=0 ttl=63 time=7.727 ms
64 bytes from 10.0.2.8: seq=1 ttl=63 time=3.832 ms
^C
--- 10.0.2.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 3.832/5.779/7.727 ms

Since in this case the packet must go to the gateway and be routed there, we need to find out the MAC address of the gateway, for which we look at the ARP table inside the instance:

$ arp
host-10-0-1-254.openstacklocal (10.0.1.254) at fa:16:3e:c4:64:70 [ether]  on eth0
host-10-0-1-1.openstacklocal (10.0.1.1) at fa:16:3e:e6:2c:5c [ether]  on eth0
host-10-0-1-90.openstacklocal (10.0.1.90) at fa:16:3e:83:ad:a4 [ether]  on eth0
host-10-0-1-88.openstacklocal (10.0.1.88) at fa:16:3e:72:ad:53 [ether]  on eth0

Now let's see where traffic with destination mac (10.0.1.254) fa:16:3e:c4:64:70 should be sent:

[heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-appctl fdb/show br-int | egrep fa:16:3e:c4:64:70
    2     1  fa:16:3e:c4:64:70    0
[heat-admin@overcloud-novacompute-0 ~]$ 

Let's see where port 2 leads:

[heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-ofctl show br-int | grep addr
 1(int-br-ex): addr:7e:7f:28:1f:bd:54
 2(patch-tun): addr:0a:bd:07:69:58:d9
 3(qvo95d96a75-a0): addr:ea:50:9a:3d:69:58
 6(qvo5bd37136-47): addr:9a:d1:03:50:3d:96
 LOCAL(br-int): addr:1a:0f:53:97:b1:49
[heat-admin@overcloud-novacompute-0 ~]$ 

Everything is logical — the traffic goes to br-tun. Let's see which vxlan tunnel it gets wrapped into:

[heat-admin@overcloud-novacompute-0 ~]$ sudo ovs-ofctl dump-flows br-tun | grep fa:16:3e:c4:64:70
 cookie=0x8759a56536b67a8e, duration=3514.566s, table=20, n_packets=3368, n_bytes=317072, hard_timeout=300, idle_age=0, hard_age=0, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:c4:64:70 actions=load:0->NXM_OF_VLAN_TCI[],load:0x16->NXM_NX_TUN_ID[],output:3
[heat-admin@overcloud-novacompute-0 ~]$ 

Port 3 is the vxlan tunnel:

[heat-admin@overcloud-controller-0 ~]$ sudo ovs-ofctl show br-tun | grep addr
 1(patch-int): addr:a2:69:00:c5:fa:ba
 2(vxlan-c0a8ff1a): addr:86:f0:ce:d0:e8:ea
 3(vxlan-c0a8ff13): addr:72:aa:73:2c:2e:5b
 LOCAL(br-tun): addr:a6:cb:cd:72:1c:45
[heat-admin@overcloud-controller-0 ~]$ 

Which looks towards the control node:

[heat-admin@overcloud-controller-0 ~]$ sudo sudo ovs-appctl dpif/show | grep vxlan-c0a8ff1a
    vxlan-c0a8ff1a 2/5: (vxlan: egress_pkt_mark=0, key=flow, local_ip=192.168.255.15, remote_ip=192.168.255.26)
[heat-admin@overcloud-controller-0 ~]$ 

The traffic arrives at the control node, so let's go there and see how routing happens.

As you remember, the control node at that point looked exactly like the compute node inside — the same three bridges, only br-ex had a physical port through which the node could send traffic outside. The creation of instances changed the configuration on the compute nodes — a linux bridge, iptables and interfaces were added to the nodes. The creation of networks and of the virtual router also left its mark on the configuration of the control node.

So it is obvious that the gateway mac address must appear in the forwarding table on the control node. Let's check that it is there and where it points:

[heat-admin@overcloud-controller-0 ~]$ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:c4:64:70
    5     1  fa:16:3e:c4:64:70    1
[heat-admin@overcloud-controller-0 ~]$ 
[heat-admin@overcloud-controller-0 ~]$  sudo ovs-ofctl show br-int | grep addr
 1(int-br-ex): addr:2e:58:b6:db:d5:de
 2(patch-tun): addr:06:41:90:f0:9e:56
 3(tapca25a97e-64): addr:fa:16:3e:e6:2c:5c
 4(tap22015e46-0b): addr:fa:16:3e:76:c2:11
 5(qr-0c52b15f-8f): addr:fa:16:3e:c4:64:70
 6(qr-92fa49b5-54): addr:fa:16:3e:80:13:72
 LOCAL(br-int): addr:06:de:5d:ed:44:44
[heat-admin@overcloud-controller-0 ~]$ 

The mac is visible from port qr-0c52b15f-8f. If we go back to the list of virtual port types in Openstack, this port type is used to connect various virtual devices to OVS. To be more precise, qr is a port towards the virtual router, which is implemented as a namespace.

Let's see which namespaces exist on the server:

[heat-admin@overcloud-controller-0 ~]$ sudo  ip netns
qrouter-0a4d2420-4b9c-46bd-aec1-86a1ef299abe (id: 2)
qdhcp-7d541e74-1c36-4e1d-a7c4-0968c8dbc638 (id: 1)
qdhcp-67a3798c-32c0-4c18-8502-2531247e3cc2 (id: 0)
[heat-admin@overcloud-controller-0 ~]$ 

Three of them, no less. But judging by the names, you can guess the purpose of each. We will come back to the instances with ID 0 and 1 later; right now we are interested in the namespace qrouter-0a4d2420-4b9c-46bd-aec1-86a1ef299abe:


[heat-admin@overcloud-controller-0 ~]$ sudo  ip netns exec qrouter-0a4d2420-4b9c-46bd-aec1-86a1ef299abe ip route
10.0.1.0/24 dev qr-0c52b15f-8f proto kernel scope link src 10.0.1.254 
10.0.2.0/24 dev qr-92fa49b5-54 proto kernel scope link src 10.0.2.254 
[heat-admin@overcloud-controller-0 ~]$ 

This namespace contains the two internal networks we created earlier. Both virtual ports are added to br-int. Let's check the MAC address of port qr-0c52b15f-8f, since the traffic, judging by its destination MAC address, went exactly to this interface.

[heat-admin@overcloud-controller-0 ~]$ sudo  ip netns exec qrouter-0a4d2420-4b9c-46bd-aec1-86a1ef299abe ifconfig qr-0c52b15f-8f
qr-0c52b15f-8f: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.0.1.254  netmask 255.255.255.0  broadcast 10.0.1.255
        inet6 fe80::f816:3eff:fec4:6470  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:c4:64:70  txqueuelen 1000  (Ethernet)
        RX packets 5356  bytes 427305 (417.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5195  bytes 490603 (479.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[heat-admin@overcloud-controller-0 ~]$ 

That is, in this case everything works by the laws of routing. Since the traffic is destined for host 10.0.2.8, it has to leave through the qr-92fa49b5-54 interface and go through the vxlan tunnel to the compute node:


[heat-admin@overcloud-controller-0 ~]$ sudo  ip netns exec qrouter-0a4d2420-4b9c-46bd-aec1-86a1ef299abe arp
Address                  HWtype  HWaddress           Flags Mask            Iface
10.0.1.88                ether   fa:16:3e:72:ad:53   C                     qr-0c52b15f-8f
10.0.1.90                ether   fa:16:3e:83:ad:a4   C                     qr-0c52b15f-8f
10.0.2.8                 ether   fa:16:3e:6c:ad:9c   C                     qr-92fa49b5-54
10.0.2.42                ether   fa:16:3e:f5:0b:29   C                     qr-92fa49b5-54
10.0.1.85                ether   fa:16:3e:44:98:20   C                     qr-0c52b15f-8f
[heat-admin@overcloud-controller-0 ~]$ 
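
For completeness, the routing decision itself can be asked of the kernel inside the router namespace. ip route get shows which interface and source address will be used to reach 10.0.2.8, and it should match the routing table we saw above:

[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qrouter-0a4d2420-4b9c-46bd-aec1-86a1ef299abe ip route get 10.0.2.8
# expected to resolve via dev qr-92fa49b5-54 with src 10.0.2.254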

Everything is logical, no surprises. Let's see where the MAC address of host 10.0.2.8 is visible in br-int:

[heat-admin@overcloud-controller-0 ~]$ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:6c:ad:9c
    2     2  fa:16:3e:6c:ad:9c    1
[heat-admin@overcloud-controller-0 ~]$ 
[heat-admin@overcloud-controller-0 ~]$ sudo ovs-ofctl show br-int | grep addr
 1(int-br-ex): addr:2e:58:b6:db:d5:de
 2(patch-tun): addr:06:41:90:f0:9e:56
 3(tapca25a97e-64): addr:fa:16:3e:e6:2c:5c
 4(tap22015e46-0b): addr:fa:16:3e:76:c2:11
 5(qr-0c52b15f-8f): addr:fa:16:3e:c4:64:70
 6(qr-92fa49b5-54): addr:fa:16:3e:80:13:72
 LOCAL(br-int): addr:06:de:5d:ed:44:44
[heat-admin@overcloud-controller-0 ~]$ 

As expected, the traffic goes to br-tun; let's see which tunnel it heads into next:

[heat-admin@overcloud-controller-0 ~]$ sudo ovs-ofctl dump-flows br-tun | grep fa:16:3e:6c:ad:9c
 cookie=0x2ab04bf27114410e, duration=5346.829s, table=20, n_packets=5248, n_bytes=498512, hard_timeout=300, idle_age=0, hard_age=0, priority=1,vlan_tci=0x0002/0x0fff,dl_dst=fa:16:3e:6c:ad:9c actions=load:0->NXM_OF_VLAN_TCI[],load:0x63->NXM_NX_TUN_ID[],output:2
[heat-admin@overcloud-controller-0 ~]$
[heat-admin@overcloud-controller-0 ~]$ sudo ovs-ofctl show br-tun | grep addr
 1(patch-int): addr:a2:69:00:c5:fa:ba
 2(vxlan-c0a8ff1a): addr:86:f0:ce:d0:e8:ea
 3(vxlan-c0a8ff13): addr:72:aa:73:2c:2e:5b
 LOCAL(br-tun): addr:a6:cb:cd:72:1c:45
[heat-admin@overcloud-controller-0 ~]$ 
[heat-admin@overcloud-controller-0 ~]$ sudo sudo ovs-appctl dpif/show | grep vxlan-c0a8ff1a
    vxlan-c0a8ff1a 2/5: (vxlan: egress_pkt_mark=0, key=flow, local_ip=192.168.255.15, remote_ip=192.168.255.26)
[heat-admin@overcloud-controller-0 ~]$ 

The traffic goes into the tunnel towards compute-1. Well, on compute-1 everything is simple - from br-tun the packet goes to br-int and from there to the virtual machine's interface:

[heat-admin@overcloud-novacompute-1 ~]$ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:6c:ad:9c
    4     2  fa:16:3e:6c:ad:9c    1
[heat-admin@overcloud-novacompute-1 ~]$ sudo ovs-ofctl show br-int | grep addr                  
 1(int-br-ex): addr:8a:d7:f9:ad:8c:1d
 2(patch-tun): addr:46:cc:40:bd:20:da
 3(qvoe7e23f1b-07): addr:12:78:2e:34:6a:c7
 4(qvo3210e8ec-c0): addr:7a:5f:59:75:40:85
 LOCAL(br-int): addr:e2:27:b2:ed:14:46
[heat-admin@overcloud-novacompute-1 ~]$ 

Let's check that this really is the right interface:

[heat-admin@overcloud-novacompute-1 ~]$ brctl show
bridge name     bridge id               STP enabled     interfaces
docker0         8000.02429c001e1c       no
qbr3210e8ec-c0          8000.ea27f45358be       no              qvb3210e8ec-c0
                                                        tap3210e8ec-c0
qbre7e23f1b-07          8000.b26ac0eded8a       no              qvbe7e23f1b-07
                                                        tape7e23f1b-07
[heat-admin@overcloud-novacompute-1 ~]$ 
[heat-admin@overcloud-novacompute-1 ~]$ sudo virsh domiflist instance-00000004
Interface  Type       Source     Model       MAC
-------------------------------------------------------
tap3210e8ec-c0 bridge     qbr3210e8ec-c0 virtio      fa:16:3e:6c:ad:9c

[heat-admin@overcloud-novacompute-1 ~]$

So, we have traced the packet's entire path. You probably noticed that the traffic passed through different vxlan tunnels with different VNIs. Let's see what these VNIs are, after which we will take a traffic dump on the control node's port and make sure the traffic flows exactly as described above.
So, the tunnel we looked at on compute-0 has the following actions: actions=load:0->NXM_OF_VLAN_TCI[],load:0x16->NXM_NX_TUN_ID[],output:3. Let's convert 0x16 to decimal:


0x16 = 6*16^0+1*16^1 = 6+16 = 22

The tunnel towards compute-1 has the following actions: actions=load:0->NXM_OF_VLAN_TCI[],load:0x63->NXM_NX_TUN_ID[],output:2. Let's convert 0x63 to decimal:


0x63 = 3*16^0+6*16^1 = 3+96 = 99
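
If you do not feel like doing hex arithmetic by hand, the same conversion can be done right in the shell with the bash printf builtin (nothing OpenStack-specific here):

[heat-admin@overcloud-controller-0 ~]$ printf '%d\n' 0x16
22
[heat-admin@overcloud-controller-0 ~]$ printf '%d\n' 0x63
99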

Okay, now let's look at the dump:

[root@hp-gen9 bormoglotx]# tcpdump -vvv -i vnet4 
tcpdump: listening on vnet4, link-type EN10MB (Ethernet), capture size 262144 bytes

*****************omitted*******************

04:35:18.709949 IP (tos 0x0, ttl 64, id 48650, offset 0, flags [DF], proto UDP (17), length 134)
    192.168.255.19.41591 > 192.168.255.15.4789: [no cksum] VXLAN, flags [I] (0x08), vni 22
IP (tos 0x0, ttl 64, id 49042, offset 0, flags [DF], proto ICMP (1), length 84)
    10.0.1.85 > 10.0.2.8: ICMP echo request, id 5378, seq 9, length 64
04:35:18.710159 IP (tos 0x0, ttl 64, id 23360, offset 0, flags [DF], proto UDP (17), length 134)
    192.168.255.15.38983 > 192.168.255.26.4789: [no cksum] VXLAN, flags [I] (0x08), vni 99
IP (tos 0x0, ttl 63, id 49042, offset 0, flags [DF], proto ICMP (1), length 84)
    10.0.1.85 > 10.0.2.8: ICMP echo request, id 5378, seq 9, length 64
04:35:18.711292 IP (tos 0x0, ttl 64, id 43596, offset 0, flags [DF], proto UDP (17), length 134)
    192.168.255.26.42588 > 192.168.255.15.4789: [no cksum] VXLAN, flags [I] (0x08), vni 99
IP (tos 0x0, ttl 64, id 55103, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.2.8 > 10.0.1.85: ICMP echo reply, id 5378, seq 9, length 64
04:35:18.711531 IP (tos 0x0, ttl 64, id 8555, offset 0, flags [DF], proto UDP (17), length 134)
    192.168.255.15.38983 > 192.168.255.19.4789: [no cksum] VXLAN, flags [I] (0x08), vni 22
IP (tos 0x0, ttl 63, id 55103, offset 0, flags [none], proto ICMP (1), length 84)
    10.0.2.8 > 10.0.1.85: ICMP echo reply, id 5378, seq 9, length 64
	
*****************omitted*******************

The first packet is a vxlan packet from host 192.168.255.19 (compute-0) to host 192.168.255.15 (control) with vni 22, inside which an ICMP packet from host 10.0.1.85 to host 10.0.2.8 is packed. As we calculated above, the vni matches what we saw in the flow output.

The second packet is a vxlan packet from host 192.168.255.15 (control) to host 192.168.255.26 (compute-1) with vni 99, inside which an ICMP packet from host 10.0.1.85 to host 10.0.2.8 is packed. As we calculated above, the vni matches what we saw in the flow output.

The next two packets are the return traffic from 10.0.2.8 to 10.0.1.85.
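
By the way, if the interface you capture on carries other traffic as well, the dump can be narrowed down to VXLAN only - the encapsulation uses UDP port 4789, which is exactly what we see in the capture above:

[root@hp-gen9 bormoglotx]# tcpdump -nn -i vnet4 udp port 4789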

That is, in the end we arrive at the following picture of the control node:

[Image: resulting traffic-flow scheme on the control node]

Looks about right? But we forgot about two namespaces:

[heat-admin@overcloud-controller-0 ~]$ sudo  ip netns
qrouter-0a4d2420-4b9c-46bd-aec1-86a1ef299abe (id: 2)
qdhcp-7d541e74-1c36-4e1d-a7c4-0968c8dbc638 (id: 1)
qdhcp-67a3798c-32c0-4c18-8502-2531247e3cc2 (id: 0)
[heat-admin@overcloud-controller-0 ~]$ 

As we said when discussing the architecture of the cloud platform, it would be good if machines received their addresses automatically from a DHCP server. These are exactly that: two DHCP servers, one for each of our networks 10.0.1.0/24 and 10.0.2.0/24.
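
The namespace name itself already tells you which network it serves - it is simply qdhcp-<network UUID>. If you want to confirm the mapping from the API side (again assuming the openstack CLI with sourced credentials; the prompt is illustrative), something like this would do:

(overcloud) [stack@undercloud ~]$ openstack network show 67a3798c-32c0-4c18-8502-2531247e3cc2 -c name -c subnets
(overcloud) [stack@undercloud ~]$ openstack network show 7d541e74-1c36-4e1d-a7c4-0968c8dbc638 -c name -c subnets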

Let's check that this is true. There is only one address in this namespace - 10.0.1.1 - the address of the DHCP server itself, and it is also plugged into br-int:

[heat-admin@overcloud-controller-0 ~]$ sudo ip netns exec qdhcp-67a3798c-32c0-4c18-8502-2531247e3cc2 ifconfig
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 1  bytes 28 (28.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 28 (28.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

tapca25a97e-64: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.0.1.1  netmask 255.255.255.0  broadcast 10.0.1.255
        inet6 fe80::f816:3eff:fee6:2c5c  prefixlen 64  scopeid 0x20<link>
        ether fa:16:3e:e6:2c:5c  txqueuelen 1000  (Ethernet)
        RX packets 129  bytes 9372 (9.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 49  bytes 6154 (6.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Let's see whether there are processes on the control node whose name contains the qdhcp namespace:


[heat-admin@overcloud-controller-0 ~]$ ps -aux | egrep qdhcp-7d541e74-1c36-4e1d-a7c4-0968c8dbc638 
root      640420  0.0  0.0   4220   348 ?        Ss   11:31   0:00 dumb-init --single-child -- ip netns exec qdhcp-7d541e74-1c36-4e1d-a7c4-0968c8dbc638 /usr/sbin/dnsmasq -k --no-hosts --no-resolv --pid-file=/var/lib/neutron/dhcp/7d541e74-1c36-4e1d-a7c4-0968c8dbc638/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/7d541e74-1c36-4e1d-a7c4-0968c8dbc638/host --addn-hosts=/var/lib/neutron/dhcp/7d541e74-1c36-4e1d-a7c4-0968c8dbc638/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/7d541e74-1c36-4e1d-a7c4-0968c8dbc638/opts --dhcp-leasefile=/var/lib/neutron/dhcp/7d541e74-1c36-4e1d-a7c4-0968c8dbc638/leases --dhcp-match=set:ipxe,175 --local-service --bind-dynamic --dhcp-range=set:subnet-335552dd-b35b-456b-9df0-5aac36a3ca13,10.0.2.0,static,255.255.255.0,86400s --dhcp-option-force=option:mtu,1450 --dhcp-lease-max=256 --conf-file= --domain=openstacklocal
heat-ad+  951620  0.0  0.0 112944   980 pts/0    S+   18:50   0:00 grep -E --color=auto qdhcp-7d541e74-1c36-4e1d-a7c4-0968c8dbc638
[heat-admin@overcloud-controller-0 ~]$ 

Such a process exists, and based on the information in the output above we can, for example, see what leases we currently have:

[heat-admin@overcloud-controller-0 ~]$ cat /var/lib/neutron/dhcp/7d541e74-1c36-4e1d-a7c4-0968c8dbc638/leases
1597492111 fa:16:3e:6c:ad:9c 10.0.2.8 host-10-0-2-8 01:fa:16:3e:6c:ad:9c
1597491115 fa:16:3e:76:c2:11 10.0.2.1 host-10-0-2-1 *
[heat-admin@overcloud-controller-0 ~]$
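
Each line in the dnsmasq leases file is: lease expiry time (a Unix timestamp), client MAC, assigned IP, hostname and client-id. The expiry time can be turned into something human-readable with plain date:

[heat-admin@overcloud-controller-0 ~]$ date -d @1597492111      # prints the expiry of the 10.0.2.8 lease in local time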

As a result, we end up with the following set of services on the control node:

[Image: services running on the control node]

Keep in mind that this is just 4 machines, 2 internal networks and one virtual router... We do not have external networks here yet, nor a pile of different projects each with its own (possibly overlapping) networks; the distributed router is turned off; and in the end the test bench had only one control node (for fault tolerance there should be a quorum of three nodes). It is natural that in real life everything is "a bit" more complicated, but even in this simple example we understand how it is supposed to work - whether you have 3 or 300 namespaces certainly matters, but from the point of view of how the whole construction operates, not much will change... that is, until you plug in some vendor SDN. But that is a whole different story.

I hope it was interesting. If you have comments or additions, or if I got something outright wrong somewhere (I am human, and my opinion will always be subjective), write what needs to be corrected or added and we will fix and extend everything.

In conclusion, I would like to say a few words about comparing Openstack (both vanilla and vendor flavours) with the cloud solution from VMWare - I have been asked this question far too often over the last couple of years and, frankly, I am already tired of it, but still. In my opinion, these two solutions are very hard to compare, but we can definitely say that both have their drawbacks, and when choosing one you have to weigh the pros and cons.

While OpenStack is a community-driven solution, VMWare has the right to do only what it wants (read: what is profitable for it), and that is logical - it is a commercial company that is used to making money from its clients. But there is one big, fat BUT - you can get off OpenStack, for example from Nokia, and at little expense move to a solution from, say, Juniper (Contrail Cloud), whereas you are unlikely ever to get off VMWare. To me, these two solutions look like this: Openstack (from a vendor) is a simple cage you are put into, but you hold the key and can leave at any time. VMWare is a golden cage - the owner holds the key, and leaving will cost you dearly.

I am not promoting either the first product or the second - choose whatever you need. But if I had to make such a choice, I would pick both solutions: VMWare for the IT cloud (light loads, easy management) and OpenStack from some vendor (Nokia and Juniper provide excellent turnkey solutions) for the Telecom cloud. I would not use Openstack for pure IT - it is like shooting sparrows with a cannon, although I see no contraindications to using it other than the redundancy. Using VMWare in telecom, however, is like hauling crushed stone in a Ford Raptor - it looks beautiful from the outside, but the driver has to make ten trips instead of one.

In my opinion, VMWare's biggest drawback is its complete closedness - the company will not give you any information about how things work, for example about vSAN or about what is inside the hypervisor kernel - it is simply not profitable for it. In other words, you will never become a true expert in VMWare: without vendor support you are doomed (I very often meet VMWare experts who are stumped by trivial questions). To me, VMWare is like buying a car with the hood locked - yes, you may have specialists who can change the timing belt, but only the one who sold you this solution can open the hood. Personally, I do not like solutions I cannot get inside of. You will say that you may never need to go under the hood. Maybe so, but I will be watching you when you need to assemble a large function in the cloud out of 20-30 virtual machines and 40-50 networks, half of which want to go outside while the other half asks for SR-IOV acceleration - otherwise you will need a couple more dozen of these cars, or the performance simply will not be enough.

There are other opinions, so it is up to you to decide what to choose and, most importantly, you will then be the one responsible for that choice. This is merely the opinion of a person who has seen and touched at least four products - Nokia, Juniper, Red Hat and VMWare. That is, I have something to compare with.

Source: www.habr.com
