TL;DR: a survey article, a guide to choosing an environment for running applications in containers. The capabilities of Docker and other similar systems will be examined.
A little history: where it all came from
The story
The first well-known way of isolating an application is chroot. The system call of the same name changes the root directory, so that the program which invoked it can only access files inside that directory. But if the program is granted superuser rights, it can potentially "escape" the chroot and gain access to the host operating system. Also, apart from changing the root directory, nothing else is restricted: other resources (RAM, CPU) and network access remain unconstrained.
The next way is to run a full-fledged operating system inside a container, using mechanisms of the OS kernel. This approach is named differently in different operating systems, but the essence is the same: several independent operating systems run, each of them on the same kernel that runs the host OS. This includes FreeBSD Jails, Solaris Zones, OpenVZ and LXC for Linux. Isolation covers not only disk space but other resources as well; in particular, each container can have limits on CPU time, RAM and network bandwidth. Compared to chroot, escaping the container is harder, since the container's superuser only has access to the inside of the container; however, because the operating system inside the container has to be kept up to date while old kernel versions stay in use (this applies to Linux, and to a lesser extent to FreeBSD), there is a non-zero probability of "breaking through" the kernel's isolation and gaining access to the host OS.
Instead of running a full operating system inside the container (with an init system, package manager, and so on), applications can be started directly; the main thing is to give them what they need to run (the required libraries and other files). This idea became the basis of application container virtualization, whose most prominent and best-known representative is Docker. Compared to the earlier systems, more flexible isolation mechanisms, built-in support for virtual networks between containers, and tracking of application state inside a container made it possible to build a single coherent environment out of a large number of physical servers for running containers, without the need for manual resource management.
Docker
Docker is the best-known application containerization software. Written in Go, it uses standard Linux kernel features (cgroups, namespaces, capabilities, etc.), as well as the Aufs file system and others like it, to save disk space.
Source: wikimedia
Architecture
Before version 1.11, Docker worked as a single service that performed all container operations: downloading container images, launching containers, handling API requests. Starting with version 1.11, Docker was split into several parts that interact with each other: containerd, which handles the full container lifecycle (allocating disk space, downloading images, setting up networking, launching, and monitoring container state), and runC, the container runtime, based on cgroups and other Linux kernel features. The docker service itself remains, but now it only serves API requests, which it relays to containerd.
Installation and configuration
My favourite way to install docker is docker-machine, which, besides installing and configuring docker on remote servers (including in various clouds), lets you work with the file systems of remote servers and run various commands on them.
However, the project has seen almost no development since 2018, so we will install it the way that is usual for most Linux distributions: by adding the repository and installing the required packages.
This method is also used for automated installation, for example with Ansible or similar systems, but I will not cover that in this article.
The installation will be done on Centos 7; I will use a virtual machine as the server. To install, simply run the commands below:
# yum install -y yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install docker-ce docker-ce-cli containerd.io
After installation, start the service and enable it at boot:
# systemctl enable docker
# systemctl start docker
# firewall-cmd --zone=public --add-port=2377/tcp --permanent
Additionally, you can create a docker group whose users can work with docker without sudo, set up logging, enable API access from the outside, and do not forget to fine-tune the firewall (everything not explicitly allowed is forbidden in the examples above and below; I have omitted this for simplicity and clarity), but I will not go into more detail here.
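For example, the docker group step might look like this (a sketch; "deploy" is a hypothetical user name, and keep in mind that membership in the docker group effectively grants root-equivalent access to the host):

```shell
# Let a user run docker without sudo (run as root).
# NOTE: docker group membership is effectively root-equivalent on the host.
groupadd -f docker 2>/dev/null || true   # -f: succeed if the group already exists
# usermod -aG docker deploy              # "deploy" is a hypothetical user name
getent group docker || echo "docker group not present (not running as root?)"
```

The user has to log in again for the new group membership to take effect.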
Other features
Besides the docker-machine mentioned above, there is also docker registry, a tool for storing container images, and docker compose, a tool for automating the deployment of applications in containers, where YAML files are used to build and configure the containers and everything related to them (for example, networks and persistent file systems for data storage).
It can also be used to organize CI/CD pipelines. Another interesting feature is cluster mode, the so-called swarm mode (before version 1.12 it was known as docker swarm), which lets you assemble a single infrastructure for running containers out of several servers. There is a virtual network spanning all the servers, a built-in load balancer, and support for secrets in containers.
The YAML files from docker compose can, with small modifications, be used for such clusters, fully automating the maintenance of small and medium clusters for various purposes. For large clusters Kubernetes is preferable, since the cost of maintaining swarm mode can exceed that of Kubernetes. Besides runC, other container runtimes can be installed as the container execution environment, for example…
Working with Docker
After installation and configuration, we will try to assemble a cluster and deploy GitLab and a Docker Registry on it for a development team. I will use three virtual machines as servers, on which I will additionally deploy the distributed file system GlusterFS; I will use it as storage for docker volumes, for example to run a fault-tolerant docker registry. Key components to run: Docker Registry, Postgresql, Redis, and GitLab with GitLab Runner support, all on top of Swarm. Postgresql will be clustered with Stolon.
To deploy GlusterFS on all servers (they are named node1, node2, node3), install the packages, open the firewall, and create the required directories:
# yum -y install centos-release-gluster7
# yum -y install glusterfs-server
# systemctl enable glusterd
# systemctl start glusterd
# firewall-cmd --add-service=glusterfs --permanent
# firewall-cmd --reload
# mkdir -p /srv/gluster
# mkdir -p /srv/docker
# echo "$(hostname):/docker /srv/docker glusterfs defaults,_netdev 0 0" >> /etc/fstab
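A note on the fstab line just written: it has the standard six fields (device, mount point, fs type, options, dump, pass), and the `_netdev` option defers mounting until the network is up, which matters because the gluster volume lives on the nodes themselves. A quick sanity check of the generated entry (a sketch; the hostname part varies per node):

```shell
# Reconstruct the entry the echo above appends, and inspect its fields.
entry="$(hostname):/docker /srv/docker glusterfs defaults,_netdev 0 0"
set -- $entry             # split on whitespace into $1..$6
echo "type=$3 opts=$4"    # → type=glusterfs opts=defaults,_netdev
```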
After installation, the GlusterFS configuration continues from a single node, for example node1:
# gluster peer probe node2
# gluster peer probe node3
# gluster volume create docker replica 3 node1:/srv/gluster node2:/srv/gluster node3:/srv/gluster force
# gluster volume start docker
Then mount the resulting volume (the command must be run on all servers):
# mount /srv/docker
Swarm mode is configured on one of the servers, which becomes the Leader; the others have to join the cluster, so the output of the command run on the first server will need to be copied and executed on the rest.
Initial cluster setup; I run the command on node1:
# docker swarm init
Swarm initialized: current node (a5jpfrh5uvo7svzz1ajduokyq) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-0c5mf7mvzc7o7vjk0wngno2dy70xs95tovfxbv4tqt9280toku-863hyosdlzvd76trfptd4xnzd xx.xx.xx.xx:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
# docker swarm join-token manager
Copy the output of the second command and run it on node2 and node3:
# docker swarm join --token SWMTKN-x-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxx xx.xx.xx.xx:2377
This node joined a swarm as a manager.
This completes the initial configuration of the servers; now let's configure the services. Unless stated otherwise, the commands below are run from node1.
First, let's create the networks for the containers:
# docker network create --driver=overlay etcd
# docker network create --driver=overlay pgsql
# docker network create --driver=overlay redis
# docker network create --driver=overlay traefik
# docker network create --driver=overlay gitlab
Then we label the servers; this is needed to bind certain services to specific servers:
# docker node update --label-add nodename=node1 node1
# docker node update --label-add nodename=node2 node2
# docker node update --label-add nodename=node3 node3
Next we create directories for the data of etcd, the KV store that Traefik and Stolon need. As with Postgresql, these will be containers bound to specific servers, so we run this command on all servers:
# mkdir -p /srv/etcd
Then create a file to configure etcd, and apply it:
00etcd.yml
version: '3.7'
services:
etcd1:
image: quay.io/coreos/etcd:latest
hostname: etcd1
command:
- etcd
- --name=etcd1
- --data-dir=/data.etcd
- --advertise-client-urls=http://etcd1:2379
- --listen-client-urls=http://0.0.0.0:2379
- --initial-advertise-peer-urls=http://etcd1:2380
- --listen-peer-urls=http://0.0.0.0:2380
- --initial-cluster=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
- --initial-cluster-state=new
- --initial-cluster-token=etcd-cluster
networks:
- etcd
volumes:
- etcd1vol:/data.etcd
deploy:
replicas: 1
placement:
constraints: [node.labels.nodename == node1]
etcd2:
image: quay.io/coreos/etcd:latest
hostname: etcd2
command:
- etcd
- --name=etcd2
- --data-dir=/data.etcd
- --advertise-client-urls=http://etcd2:2379
- --listen-client-urls=http://0.0.0.0:2379
- --initial-advertise-peer-urls=http://etcd2:2380
- --listen-peer-urls=http://0.0.0.0:2380
- --initial-cluster=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
- --initial-cluster-state=new
- --initial-cluster-token=etcd-cluster
networks:
- etcd
volumes:
- etcd2vol:/data.etcd
deploy:
replicas: 1
placement:
constraints: [node.labels.nodename == node2]
etcd3:
image: quay.io/coreos/etcd:latest
hostname: etcd3
command:
- etcd
- --name=etcd3
- --data-dir=/data.etcd
- --advertise-client-urls=http://etcd3:2379
- --listen-client-urls=http://0.0.0.0:2379
- --initial-advertise-peer-urls=http://etcd3:2380
- --listen-peer-urls=http://0.0.0.0:2380
- --initial-cluster=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
- --initial-cluster-state=new
- --initial-cluster-token=etcd-cluster
networks:
- etcd
volumes:
- etcd3vol:/data.etcd
deploy:
replicas: 1
placement:
constraints: [node.labels.nodename == node3]
volumes:
etcd1vol:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/etcd"
etcd2vol:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/etcd"
etcd3vol:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/etcd"
networks:
etcd:
external: true
# docker stack deploy --compose-file 00etcd.yml etcd
After a while, we check that the etcd cluster has come up:
# docker exec $(docker ps | awk '/etcd/ {print $1}') etcdctl member list
ade526d28b1f92f7: name=etcd1 peerURLs=http://etcd1:2380 clientURLs=http://etcd1:2379 isLeader=false
bd388e7810915853: name=etcd3 peerURLs=http://etcd3:2380 clientURLs=http://etcd3:2379 isLeader=false
d282ac2ce600c1ce: name=etcd2 peerURLs=http://etcd2:2380 clientURLs=http://etcd2:2379 isLeader=true
# docker exec $(docker ps | awk '/etcd/ {print $1}') etcdctl cluster-health
member ade526d28b1f92f7 is healthy: got healthy result from http://etcd1:2379
member bd388e7810915853 is healthy: got healthy result from http://etcd3:2379
member d282ac2ce600c1ce is healthy: got healthy result from http://etcd2:2379
cluster is healthy
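A side note on the `docker ps | awk ...` idiom used here: awk prints the first column (the container ID) of every line matching /etcd/, which works as long as exactly one etcd container runs on the node — the placement constraints above guarantee that. A self-contained illustration with mocked `docker ps` output (the IDs and names are made up):

```shell
# Mocked "docker ps" output; only the line mentioning "etcd" should match.
mock_ps() {
  printf '%s\n' \
    'ab12cd34ef56  quay.io/coreos/etcd:latest   "etcd"     Up 2 minutes  etcd_etcd1.1.xyz' \
    '9988aabbccdd  traefik:latest               "traefik"  Up 2 minutes  traefik_traefik.1.abc'
}
mock_ps | awk '/etcd/ {print $1}'
# → ab12cd34ef56
```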
Create the directories for Postgresql; run the command on all servers:
# mkdir -p /srv/pgsql
Then create a file to configure Postgresql:
01pgsql.yml
version: '3.7'
services:
pgsentinel:
image: sorintlab/stolon:master-pg10
command:
- gosu
- stolon
- stolon-sentinel
- --cluster-name=stolon-cluster
- --store-backend=etcdv3
- --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
- --log-level=debug
networks:
- etcd
- pgsql
deploy:
replicas: 3
update_config:
parallelism: 1
delay: 30s
order: stop-first
failure_action: pause
pgkeeper1:
image: sorintlab/stolon:master-pg10
hostname: pgkeeper1
command:
- gosu
- stolon
- stolon-keeper
- --pg-listen-address=pgkeeper1
- --pg-repl-username=replica
- --uid=pgkeeper1
- --pg-su-username=postgres
- --pg-su-passwordfile=/run/secrets/pgsql
- --pg-repl-passwordfile=/run/secrets/pgsql_repl
- --data-dir=/var/lib/postgresql/data
- --cluster-name=stolon-cluster
- --store-backend=etcdv3
- --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
networks:
- etcd
- pgsql
environment:
- PGDATA=/var/lib/postgresql/data
volumes:
- pgkeeper1:/var/lib/postgresql/data
secrets:
- pgsql
- pgsql_repl
deploy:
replicas: 1
placement:
constraints: [node.labels.nodename == node1]
pgkeeper2:
image: sorintlab/stolon:master-pg10
hostname: pgkeeper2
command:
- gosu
- stolon
- stolon-keeper
- --pg-listen-address=pgkeeper2
- --pg-repl-username=replica
- --uid=pgkeeper2
- --pg-su-username=postgres
- --pg-su-passwordfile=/run/secrets/pgsql
- --pg-repl-passwordfile=/run/secrets/pgsql_repl
- --data-dir=/var/lib/postgresql/data
- --cluster-name=stolon-cluster
- --store-backend=etcdv3
- --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
networks:
- etcd
- pgsql
environment:
- PGDATA=/var/lib/postgresql/data
volumes:
- pgkeeper2:/var/lib/postgresql/data
secrets:
- pgsql
- pgsql_repl
deploy:
replicas: 1
placement:
constraints: [node.labels.nodename == node2]
pgkeeper3:
image: sorintlab/stolon:master-pg10
hostname: pgkeeper3
command:
- gosu
- stolon
- stolon-keeper
- --pg-listen-address=pgkeeper3
- --pg-repl-username=replica
- --uid=pgkeeper3
- --pg-su-username=postgres
- --pg-su-passwordfile=/run/secrets/pgsql
- --pg-repl-passwordfile=/run/secrets/pgsql_repl
- --data-dir=/var/lib/postgresql/data
- --cluster-name=stolon-cluster
- --store-backend=etcdv3
- --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
networks:
- etcd
- pgsql
environment:
- PGDATA=/var/lib/postgresql/data
volumes:
- pgkeeper3:/var/lib/postgresql/data
secrets:
- pgsql
- pgsql_repl
deploy:
replicas: 1
placement:
constraints: [node.labels.nodename == node3]
postgresql:
image: sorintlab/stolon:master-pg10
command: gosu stolon stolon-proxy --listen-address 0.0.0.0 --cluster-name stolon-cluster --store-backend=etcdv3 --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
networks:
- etcd
- pgsql
deploy:
replicas: 3
update_config:
parallelism: 1
delay: 30s
order: stop-first
failure_action: rollback
volumes:
pgkeeper1:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/pgsql"
pgkeeper2:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/pgsql"
pgkeeper3:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/pgsql"
secrets:
pgsql:
file: "/srv/docker/postgres"
pgsql_repl:
file: "/srv/docker/replica"
networks:
etcd:
external: true
pgsql:
external: true
We generate the secrets and apply the file:
# </dev/urandom tr -dc 234567890qwertyuopasdfghjkzxcvbnmQWERTYUPASDFGHKLZXCVBNM | head -c $(((RANDOM%3)+15)) > /srv/docker/replica
# </dev/urandom tr -dc 234567890qwertyuopasdfghjkzxcvbnmQWERTYUPASDFGHKLZXCVBNM | head -c $(((RANDOM%3)+15)) > /srv/docker/postgres
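The two pipelines above produce random secrets of 15-17 characters: tr filters /dev/urandom down to the allowed alphabet (which deliberately omits some easily confused characters such as 1, i and l), and head cuts the stream at RANDOM%3+15 bytes. The same idea as a reusable function (a sketch with a simplified alphabet; note that $RANDOM is a bash-ism, in plain sh the length simply defaults to 15):

```shell
#!/bin/bash
# Random alphanumeric secret, 15-17 characters long.
gen_secret() {
  </dev/urandom tr -dc 'A-Za-z0-9' | head -c "$(( (RANDOM % 3) + 15 ))"
}
s=$(gen_secret)
echo "length=${#s}"
```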
# docker stack deploy --compose-file 01pgsql.yml pgsql
After a while (check in the output of docker service ls that all services are up), initialize the Postgresql cluster:
# docker exec $(docker ps | awk '/pgkeeper/ {print $1}') stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 init
Checking that the Postgresql cluster is ready:
# docker exec $(docker ps | awk '/pgkeeper/ {print $1}') stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 status
=== Active sentinels ===
ID LEADER
26baa11d false
74e98768 false
a8cb002b true
=== Active proxies ===
ID
4d233826
9f562f3b
b0c79ff1
=== Keepers ===
UID HEALTHY PG LISTENADDRESS PG HEALTHY PG WANTEDGENERATION PG CURRENTGENERATION
pgkeeper1 true pgkeeper1:5432 true 2 2
pgkeeper2 true pgkeeper2:5432 true 2 2
pgkeeper3 true pgkeeper3:5432 true 3 3
=== Cluster Info ===
Master Keeper: pgkeeper3
===== Keepers/DB tree =====
pgkeeper3 (master)
├─pgkeeper2
└─pgkeeper1
We configure traefik to open access to the containers from the outside:
03traefik.yml
version: '3.7'
services:
traefik:
image: traefik:latest
command: >
--log.level=INFO
--providers.docker=true
--entryPoints.web.address=:80
--providers.providersThrottleDuration=2
--providers.docker.watch=true
--providers.docker.swarmMode=true
--providers.docker.swarmModeRefreshSeconds=15s
--providers.docker.exposedbydefault=false
--accessLog.bufferingSize=0
--api=true
--api.dashboard=true
--api.insecure=true
networks:
- traefik
ports:
- 80:80
volumes:
- /var/run/docker.sock:/var/run/docker.sock
deploy:
replicas: 3
placement:
constraints:
- node.role == manager
preferences:
- spread: node.id
labels:
- traefik.enable=true
- traefik.http.routers.traefik.rule=Host(`traefik.example.com`)
- traefik.http.services.traefik.loadbalancer.server.port=8080
- traefik.docker.network=traefik
networks:
traefik:
external: true
# docker stack deploy --compose-file 03traefik.yml traefik
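With traefik in place, any other stack can be published the same way: attach the service to the traefik network and describe the router in deploy.labels (in swarm mode traefik reads labels from the service's deploy section, not from the container itself). A hedged fragment for a hypothetical service `app`; the image, hostname and port are placeholders:

```yaml
  app:
    image: nginx:alpine
    networks:
      - traefik
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.app.rule=Host(`app.example.com`)
        - traefik.http.services.app.loadbalancer.server.port=80
        - traefik.docker.network=traefik
```

The registry and GitLab stacks below follow exactly this pattern.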
We start a Redis cluster; for this we create a storage directory on all nodes:
# mkdir -p /srv/redis
05redis.yml
version: '3.7'
services:
redis-master:
image: 'bitnami/redis:latest'
networks:
- redis
ports:
- '6379:6379'
environment:
- REDIS_REPLICATION_MODE=master
- REDIS_PASSWORD=xxxxxxxxxxx
deploy:
mode: global
restart_policy:
condition: any
volumes:
- 'redis:/opt/bitnami/redis/etc/'
redis-replica:
image: 'bitnami/redis:latest'
networks:
- redis
ports:
- '6379'
depends_on:
- redis-master
environment:
- REDIS_REPLICATION_MODE=slave
- REDIS_MASTER_HOST=redis-master
- REDIS_MASTER_PORT_NUMBER=6379
- REDIS_MASTER_PASSWORD=xxxxxxxxxxx
- REDIS_PASSWORD=xxxxxxxxxxx
deploy:
mode: replicated
replicas: 3
update_config:
parallelism: 1
delay: 10s
restart_policy:
condition: any
redis-sentinel:
image: 'bitnami/redis:latest'
networks:
- redis
ports:
- '16379'
depends_on:
- redis-master
- redis-replica
entrypoint: |
bash -c 'bash -s <<EOF
"/bin/bash" -c "cat <<EOF > /opt/bitnami/redis/etc/sentinel.conf
port 16379
dir /tmp
sentinel monitor master-node redis-master 6379 2
sentinel down-after-milliseconds master-node 5000
sentinel parallel-syncs master-node 1
sentinel failover-timeout master-node 5000
sentinel auth-pass master-node xxxxxxxxxxx
sentinel announce-ip redis-sentinel
sentinel announce-port 16379
EOF"
"/bin/bash" -c "redis-sentinel /opt/bitnami/redis/etc/sentinel.conf"
EOF'
deploy:
mode: global
restart_policy:
condition: any
volumes:
redis:
driver: local
driver_opts:
type: 'none'
o: 'bind'
device: "/srv/redis"
networks:
redis:
external: true
# docker stack deploy --compose-file 05redis.yml redis
Add the Docker registry:
06registry.yml
version: '3.7'
services:
registry:
image: registry:2.6
networks:
- traefik
volumes:
- registry_data:/var/lib/registry
deploy:
replicas: 1
placement:
constraints: [node.role == manager]
restart_policy:
condition: on-failure
labels:
- traefik.enable=true
- traefik.http.routers.registry.rule=Host(`registry.example.com`)
- traefik.http.services.registry.loadbalancer.server.port=5000
- traefik.docker.network=traefik
volumes:
registry_data:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/docker/registry"
networks:
traefik:
external: true
# mkdir /srv/docker/registry
# docker stack deploy --compose-file 06registry.yml registry
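One caveat: the registry is published by traefik over plain HTTP (entrypoint :80), and docker engines refuse to push to non-TLS registries unless they are explicitly listed as insecure. A sketch of the daemon.json fragment each docker host would need (`registry.example.com` comes from the Host rule above; in production, prefer configuring TLS in traefik instead):

```shell
# Build the daemon.json fragment locally; it then belongs in
# /etc/docker/daemon.json on every host, followed by a restart of the docker service.
cat > ./daemon.json <<'EOF'
{
  "insecure-registries": ["registry.example.com"]
}
EOF
cat ./daemon.json
```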
And finally, GitLab:
08gitlab-runner.yml
version: '3.7'
services:
gitlab:
image: gitlab/gitlab-ce:latest
networks:
- pgsql
- redis
- traefik
- gitlab
ports:
- 22222:22
environment:
GITLAB_OMNIBUS_CONFIG: |
postgresql['enable'] = false
redis['enable'] = false
gitlab_rails['registry_enabled'] = false
gitlab_rails['db_username'] = "gitlab"
gitlab_rails['db_password'] = "XXXXXXXXXXX"
gitlab_rails['db_host'] = "postgresql"
gitlab_rails['db_port'] = "5432"
gitlab_rails['db_database'] = "gitlab"
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'utf8'
gitlab_rails['redis_host'] = 'redis-master'
gitlab_rails['redis_port'] = '6379'
gitlab_rails['redis_password'] = 'xxxxxxxxxxx'
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.yandex.ru"
gitlab_rails['smtp_port'] = 465
gitlab_rails['smtp_user_name'] = "[email protected]"
gitlab_rails['smtp_password'] = "xxxxxxxxx"
gitlab_rails['smtp_domain'] = "example.com"
gitlab_rails['gitlab_email_from'] = '[email protected]'
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_tls'] = true
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_openssl_verify_mode'] = 'peer'
external_url 'http://gitlab.example.com/'
gitlab_rails['gitlab_shell_ssh_port'] = 22222
volumes:
- gitlab_conf:/etc/gitlab
- gitlab_logs:/var/log/gitlab
- gitlab_data:/var/opt/gitlab
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.role == manager
labels:
- traefik.enable=true
- traefik.http.routers.gitlab.rule=Host(`gitlab.example.com`)
- traefik.http.services.gitlab.loadbalancer.server.port=80
- traefik.docker.network=traefik
gitlab-runner:
image: gitlab/gitlab-runner:latest
networks:
- gitlab
volumes:
- gitlab_runner_conf:/etc/gitlab
- /var/run/docker.sock:/var/run/docker.sock
deploy:
mode: replicated
replicas: 1
placement:
constraints:
- node.role == manager
volumes:
gitlab_conf:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/docker/gitlab/conf"
gitlab_logs:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/docker/gitlab/logs"
gitlab_data:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/docker/gitlab/data"
gitlab_runner_conf:
driver: local
driver_opts:
type: none
o: bind
device: "/srv/docker/gitlab/runner"
networks:
pgsql:
external: true
redis:
external: true
traefik:
external: true
gitlab:
external: true
# mkdir -p /srv/docker/gitlab/conf
# mkdir -p /srv/docker/gitlab/logs
# mkdir -p /srv/docker/gitlab/data
# mkdir -p /srv/docker/gitlab/runner
# docker stack deploy --compose-file 08gitlab-runner.yml gitlab
The final state of the cluster and its services:
# docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
lef9n3m92buq etcd_etcd1 replicated 1/1 quay.io/coreos/etcd:latest
ij6uyyo792x5 etcd_etcd2 replicated 1/1 quay.io/coreos/etcd:latest
fqttqpjgp6pp etcd_etcd3 replicated 1/1 quay.io/coreos/etcd:latest
hq5iyga28w33 gitlab_gitlab replicated 1/1 gitlab/gitlab-ce:latest *:22222->22/tcp
dt7s6vs0q4qc gitlab_gitlab-runner replicated 1/1 gitlab/gitlab-runner:latest
k7uoezno0h9n pgsql_pgkeeper1 replicated 1/1 sorintlab/stolon:master-pg10
cnrwul4r4nse pgsql_pgkeeper2 replicated 1/1 sorintlab/stolon:master-pg10
frflfnpty7tr pgsql_pgkeeper3 replicated 1/1 sorintlab/stolon:master-pg10
x7pqqchi52kq pgsql_pgsentinel replicated 3/3 sorintlab/stolon:master-pg10
mwu2wl8fti4r pgsql_postgresql replicated 3/3 sorintlab/stolon:master-pg10
9hkbe2vksbzb redis_redis-master global 3/3 bitnami/redis:latest *:6379->6379/tcp
l88zn8cla7dc redis_redis-replica replicated 3/3 bitnami/redis:latest *:30003->6379/tcp
1utp309xfmsy redis_redis-sentinel global 3/3 bitnami/redis:latest *:30002->16379/tcp
oteb824ylhyp registry_registry replicated 1/1 registry:2.6
qovrah8nzzu8 traefik_traefik replicated 3/3 traefik:latest *:80->80/tcp, *:443->443/tcp
What else could be improved? For one thing, configure Traefik to serve the containers over https, and add TLS encryption for Postgresql and Redis. But in general this can already be handed to developers as a PoC. Now let's look at alternatives to Docker.
Podman
Another fairly well-known engine for running containers, grouped into pods (groups of containers deployed together). Unlike Docker, it needs no service to run containers; all work is done through the libpod library. Also written in Go, it requires an OCI-compatible runtime to run containers, such as runC.
Working with Podman generally resembles working with Docker, to the point where you can simply do the following (as many who have tried it claim, including the author of this article):
$ alias docker=podman
and keep working. The situation around Podman is interesting overall: while early versions of Kubernetes worked with Docker, since around 2015, after the standardization of the container world (OCI, the Open Container Initiative) and the split of Docker into containerd and runC, an alternative to Docker has been developed for running under Kubernetes: CRI-O. In this respect, Podman is an alternative to Docker built on Kubernetes principles, including the grouping of containers into pods, but the main goal of the project is to run Docker-style containers without any extra services. For obvious reasons there is no swarm mode, since the developers plainly say that if you need a cluster, you should use Kubernetes.
Installation
To install on Centos 7, just activate the Extras repository and then install everything with the command:
# yum -y install podman
Other features
Podman can generate systemd units, which solves the problem of starting containers after a server reboot. In addition, systemd is declared to work correctly as pid 1 inside a container. For building containers there is a separate tool, buildah; there are also third-party tools, analogues of docker-compose, that additionally generate Kubernetes-compatible configuration files, so the move from Podman to Kubernetes is as simple as possible.
Working with Podman
Since there is no swarm mode (if a cluster is needed, we are supposed to switch to Kubernetes), we will assemble it in separate containers.
Install podman-compose:
# yum -y install python3-pip
# pip3 install podman-compose
The resulting configuration file for podman is slightly different; for example, we had to move what was a separate volumes section directly into the services section.
gitlab-podman.yml
version: '3.7'
services:
gitlab:
image: gitlab/gitlab-ce:latest
hostname: gitlab.example.com
restart: unless-stopped
environment:
GITLAB_OMNIBUS_CONFIG: |
gitlab_rails['gitlab_shell_ssh_port'] = 22222
ports:
- "80:80"
- "22222:22"
volumes:
- /srv/podman/gitlab/conf:/etc/gitlab
- /srv/podman/gitlab/data:/var/opt/gitlab
- /srv/podman/gitlab/logs:/var/log/gitlab
networks:
- gitlab
gitlab-runner:
image: gitlab/gitlab-runner:alpine
restart: unless-stopped
depends_on:
- gitlab
volumes:
- /srv/podman/gitlab/runner:/etc/gitlab-runner
- /var/run/docker.sock:/var/run/docker.sock
networks:
- gitlab
networks:
gitlab:
# podman-compose -f gitlab-podman.yml up -d
The result:
# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
da53da946c01 docker.io/gitlab/gitlab-runner:alpine run --user=gitlab... About a minute ago Up About a minute ago 0.0.0.0:22222->22/tcp, 0.0.0.0:80->80/tcp root_gitlab-runner_1
781c0103c94a docker.io/gitlab/gitlab-ce:latest /assets/wrapper About a minute ago Up About a minute ago 0.0.0.0:22222->22/tcp, 0.0.0.0:80->80/tcp root_gitlab_1
Let's look at what it generates for systemd and Kubernetes; for that we need the name or ID of the pod:
# podman pod ls
POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID
71fc2b2a5c63 root Running 11 minutes ago 3 db40ab8bf84b
Kubernetes:
# podman generate kube 71fc2b2a5c63
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.6.4
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-07-29T19:22:40Z"
labels:
app: root
name: root
spec:
containers:
- command:
- /assets/wrapper
env:
- name: PATH
value: /opt/gitlab/embedded/bin:/opt/gitlab/bin:/assets:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: HOSTNAME
value: gitlab.example.com
- name: container
value: podman
- name: GITLAB_OMNIBUS_CONFIG
value: |
gitlab_rails['gitlab_shell_ssh_port'] = 22222
- name: LANG
value: C.UTF-8
image: docker.io/gitlab/gitlab-ce:latest
name: rootgitlab1
ports:
- containerPort: 22
hostPort: 22222
protocol: TCP
- containerPort: 80
hostPort: 80
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
volumeMounts:
- mountPath: /var/opt/gitlab
name: srv-podman-gitlab-data
- mountPath: /var/log/gitlab
name: srv-podman-gitlab-logs
- mountPath: /etc/gitlab
name: srv-podman-gitlab-conf
workingDir: /
- command:
- run
- --user=gitlab-runner
- --working-directory=/home/gitlab-runner
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: HOSTNAME
- name: container
value: podman
image: docker.io/gitlab/gitlab-runner:alpine
name: rootgitlab-runner1
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
volumeMounts:
- mountPath: /etc/gitlab-runner
name: srv-podman-gitlab-runner
- mountPath: /var/run/docker.sock
name: var-run-docker.sock
workingDir: /
volumes:
- hostPath:
path: /srv/podman/gitlab/runner
type: Directory
name: srv-podman-gitlab-runner
- hostPath:
path: /var/run/docker.sock
type: File
name: var-run-docker.sock
- hostPath:
path: /srv/podman/gitlab/data
type: Directory
name: srv-podman-gitlab-data
- hostPath:
path: /srv/podman/gitlab/logs
type: Directory
name: srv-podman-gitlab-logs
- hostPath:
path: /srv/podman/gitlab/conf
type: Directory
name: srv-podman-gitlab-conf
status: {}
systemd:
# podman generate systemd 71fc2b2a5c63
# pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
# autogenerated by Podman 1.6.4
# Thu Jul 29 15:23:28 EDT 2020
[Unit]
Description=Podman pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
Documentation=man:podman-generate-systemd(1)
Requires=container-781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3.service container-da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864.service
Before=container-781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3.service container-da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864.service
[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start db40ab8bf84bf35141159c26cb6e256b889c7a98c0418eee3c4aa683c14fccaa
ExecStop=/usr/bin/podman stop -t 10 db40ab8bf84bf35141159c26cb6e256b889c7a98c0418eee3c4aa683c14fccaa
KillMode=none
Type=forking
PIDFile=/var/run/containers/storage/overlay-containers/db40ab8bf84bf35141159c26cb6e256b889c7a98c0418eee3c4aa683c14fccaa/userdata/conmon.pid
[Install]
WantedBy=multi-user.target
# container-da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864.service
# autogenerated by Podman 1.6.4
# Thu Jul 29 15:23:28 EDT 2020
[Unit]
Description=Podman container-da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864.service
Documentation=man:podman-generate-systemd(1)
RefuseManualStart=yes
RefuseManualStop=yes
BindsTo=pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
After=pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864
ExecStop=/usr/bin/podman stop -t 10 da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864
KillMode=none
Type=forking
PIDFile=/var/run/containers/storage/overlay-containers/da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864/userdata/conmon.pid
[Install]
WantedBy=multi-user.target
# container-781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3.service
# autogenerated by Podman 1.6.4
# Thu Jul 29 15:23:28 EDT 2020
[Unit]
Description=Podman container-781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3.service
Documentation=man:podman-generate-systemd(1)
RefuseManualStart=yes
RefuseManualStop=yes
BindsTo=pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
After=pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start 781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3
ExecStop=/usr/bin/podman stop -t 10 781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3
KillMode=none
Type=forking
PIDFile=/var/run/containers/storage/overlay-containers/781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3/userdata/conmon.pid
[Install]
WantedBy=multi-user.target
Unfortunately, apart from starting the containers, the generated systemd unit does nothing else (for example, it does not clean up old containers when such a service is restarted), so you will have to add that yourself.
In principle, Podman is enough to try out what containers are, to move old docker-compose configurations over, and then to go on to Kubernetes if a cluster is needed, or to get an easier-to-operate alternative to Docker.
Conclusions
The situation is very interesting: on the one hand, with Docker you can assemble a cluster (in swarm mode) on which you can even run production environments for clients; this suits small teams (3-5 people) in particular, or setups with a small overall load, or a lack of desire to dig into the intricacies of configuring Kubernetes, including for high loads.
Podman does not provide full compatibility, but it has one important advantage: compatibility with Kubernetes, including the additional tooling (buildah and others). So my approach to choosing a tool for the job is as follows: for small teams or with a limited budget, Docker (possibly with swarm mode); for developing for myself on a personal local host, Podman pods; and for everyone else, Kubernetes.
I am not sure that the situation around Docker will not change in the future; after all, they are the pioneers, and they too are slowly standardizing step by step. But for Podman, with all its shortcomings (it runs only on Linux, there is no clustering, and image building and other operations rely on third-party solutions), the future is clearer, so I invite everyone to discuss these conclusions in the comments.
Source: www.habr.com