Docker and all, all, all

TL;DR: An overview article comparing environments for running applications in containers. We will look at the capabilities of Docker and other similar systems.


A bit of history about where it all came from

Background

The first well-known method of isolating applications is chroot. The system call of the same name changes the root directory, so that the program which called it can only access files inside that directory. However, if the program is given root privileges inside, it can potentially "escape" the chroot and gain access to the host operating system. Moreover, apart from changing the root directory, other resources (RAM, CPU) and network access are not restricted at all.
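
As a rough illustration (the directory /srv/jail and the copied binary are arbitrary examples), preparing and entering a minimal chroot environment usually looks something like this:

# mkdir -p /srv/jail/bin /srv/jail/lib64
# cp /bin/bash /srv/jail/bin/
# ldd /bin/bash          # copy the libraries it lists into /srv/jail/lib64 as well
# chroot /srv/jail /bin/bash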

The next approach is to run a full operating system inside a container, on top of the host kernel. The approach goes by different names in different operating systems, but the essence is the same: running several independent operating systems, each of which uses the same kernel that runs the host operating system. This includes FreeBSD Jails, Solaris Zones, OpenVZ and LXC for Linux. Isolation covers not only disk space but other resources as well; in particular, each container can be limited in CPU time, RAM and network bandwidth. Compared to chroot, escaping a container is harder, since root inside the container only has access to the inside of the container; however, because the operating system inside the container has to be kept up to date, and because old kernel versions are sometimes used (this applies to Linux and, to a lesser extent, FreeBSD), there is a non-zero probability of breaking through the kernel isolation and gaining access to the host operating system.
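
For comparison, a hedged sketch of the same idea with LXC (assuming the lxc packages are installed; the container name and the memory limit are arbitrary):

# lxc-create -n test -t download -- -d centos -r 7 -a amd64
# lxc-start -n test
# lxc-cgroup -n test memory.limit_in_bytes 512M      # limit RAM via cgroups
# lxc-attach -n test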

Instead of launching a full operating system inside a container (with an init system, a package manager and so on), you can launch applications directly; the main thing is to give them what they need to run (the necessary libraries and other files). This idea became the basis of application containerization, whose most prominent and well-known representative is Docker. Compared with earlier systems, more flexible isolation mechanisms, together with built-in support for virtual networks between containers and for tracking application state inside a container, made it possible to build a single coherent environment out of a large number of physical servers running containers, without manual resource management.

Docker

Docker is the best-known containerization software. It is written in Go and uses standard features of the Linux kernel (cgroups, namespaces, capabilities and so on), as well as Aufs and similar file systems, to save disk space.

Source: wikimedia

Architecture

Before version 1.11, Docker ran as a single service that performed all operations on containers: downloading container images, launching containers, handling API requests. Starting with version 1.11, Docker was split into several interacting parts: containerd, which handles the full container lifecycle (allocating disk space, downloading images, networking, launching, configuring and monitoring container state), and runC, the container runtime, based on cgroups and other Linux kernel features. The docker service itself remains, but now it is only used to process API requests, which it relays to containerd.
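
You can also talk to containerd directly, bypassing the docker service, with the ctr client that ships with it; a quick sketch (the image and the container name are just examples):

# ctr images pull docker.io/library/alpine:latest
# ctr run --rm -t docker.io/library/alpine:latest demo sh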


Installation and configuration

My favourite way of installing docker is docker-machine, which, besides installing and configuring docker on remote servers (including in various clouds), lets you work with the file systems of remote servers and can also run various commands there.
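
For reference, provisioning an existing remote host with docker-machine looks roughly like this (the driver, the address and the machine name below are placeholders):

$ docker-machine create --driver generic --generic-ip-address=xx.xx.xx.xx --generic-ssh-user=root remote-host
$ eval $(docker-machine env remote-host)
$ docker info        # the local client now talks to the remote docker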

However, the project has not been maintained since 2018, so we will install Docker in the way that is standard for most Linux distributions: by adding the repository and installing the required packages.

This approach is also used for automated installation, for example with Ansible or similar systems, but I will not cover that in this article.

The installation will be done on CentOS 7, and I will use a virtual machine as the server. To install, just run the commands below:

# yum install -y yum-utils
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install docker-ce docker-ce-cli containerd.io

After installation, start the service and enable it at boot:

# systemctl enable docker
# systemctl start docker
# firewall-cmd --zone=public --add-port=2377/tcp --permanent

In addition, you can create a docker group whose members can work with docker without sudo, set up logging, enable API access from outside, and do not forget to fine-tune the firewall (in the examples above and below, everything that is not explicitly allowed is forbidden; I left it that way for simplicity and clarity), but I will not go into more detail here.
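
Just for reference, the group part is a couple of commands (the user name is a placeholder; re-login afterwards for the change to take effect):

# groupadd docker          # the group may already exist after package installation
# usermod -aG docker someuser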

Other features

Besides the docker-machine mentioned above, there is also docker registry, a tool for storing container images, as well as docker compose, a tool for automating the deployment of applications in containers; YAML files are used to build and configure the containers and related things (for example, networks and persistent file systems for storing data).
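
As a minimal sketch, the registry can also be tried standalone, outside the cluster described further below:

# docker run -d -p 5000:5000 --restart=always --name registry registry:2
# docker tag alpine:latest localhost:5000/alpine:latest
# docker push localhost:5000/alpine:latest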

It can also be used to organize pipelines for CI/CD. Another interesting feature is cluster mode, the so-called swarm mode (before version 1.12 it was known as docker swarm), which lets you assemble a single infrastructure for running containers out of several servers. There is support for a virtual network spanning all the servers, there is a built-in load balancer, and there is support for secrets for containers.
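
For example, once swarm mode is enabled, secrets are created and handed to services like this (the names are placeholders; inside the container the secret shows up under /run/secrets/):

# echo "s3cr3t" | docker secret create db_password -
# docker service create --name demo --secret db_password nginx:alpine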

The YAML files from docker compose can be used for such clusters with minor modifications, fully automating the maintenance of small and medium clusters for various purposes. For large clusters, Kubernetes is preferable, because the cost of maintaining swarm mode can exceed that of Kubernetes. Besides runC, you can install other container runtimes, for example Kata Containers.

Working with Docker

After installation and configuration, we will try to assemble a cluster on which to deploy GitLab and Docker Registry for a development team. I will use three virtual machines as servers, on which I will deploy the distributed file system GlusterFS; I will use it as storage for docker volumes, for example to run an insecure (HTTP) version of the docker registry. Key components to run: Docker Registry, Postgresql, Redis, and GitLab with GitLab Runner support, on top of Swarm. Postgresql will be run as a cluster with Stolon, so GlusterFS is not needed for storing the Postgresql data. The rest of the critical data will be stored on GlusterFS.

To deploy GlusterFS on all the servers (they are named node1, node2, node3), you need to install the packages, enable the firewall rules and create the required directories:

# yum -y install centos-release-gluster7
# yum -y install glusterfs-server
# systemctl enable glusterd
# systemctl start glusterd
# firewall-cmd --add-service=glusterfs --permanent
# firewall-cmd --reload
# mkdir -p /srv/gluster
# mkdir -p /srv/docker
# echo "$(hostname):/docker /srv/docker glusterfs defaults,_netdev 0 0" >> /etc/fstab

After installation, the GlusterFS configuration must be continued from a single node, for example node1:

# gluster peer probe node2
# gluster peer probe node3
# gluster volume create docker replica 3 node1:/srv/gluster node2:/srv/gluster node3:/srv/gluster force
# gluster volume start docker

Then you need to mount the resulting volume (the command must be run on all servers):

# mount /srv/docker

Swarm mode is set up on one server, which will become the Leader; the others will have to join the cluster, so the output of the command run on the first server will need to be copied and executed on the rest.

Initial cluster setup; I run the command on node1:

# docker swarm init
Swarm initialized: current node (a5jpfrh5uvo7svzz1ajduokyq) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-0c5mf7mvzc7o7vjk0wngno2dy70xs95tovfxbv4tqt9280toku-863hyosdlzvd76trfptd4xnzd xx.xx.xx.xx:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
# docker swarm join-token manager

Copy the output of the second command and run it on node2 and node3:

# docker swarm join --token SWMTKN-x-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxx xx.xx.xx.xx:2377
This node joined a swarm as a manager.

This completes the initial server setup; let's start configuring the services. Unless stated otherwise, the commands will be run from node1.

First of all, let's create the networks for the containers:

# docker network create --driver=overlay etcd
# docker network create --driver=overlay pgsql
# docker network create --driver=overlay redis
# docker network create --driver=overlay traefik
# docker network create --driver=overlay gitlab

Then we label the servers; this is needed to pin certain services to specific servers:

# docker node update --label-add nodename=node1 node1
# docker node update --label-add nodename=node2 node2
# docker node update --label-add nodename=node3 node3

Next, we create the directories for storing etcd data, the KV store that Traefik and Stolon need. As with Postgresql, these will be containers pinned to specific servers, so we run this command on all servers:

# mkdir -p /srv/etcd

Next, create a file to configure etcd and apply it:

00etcd.yml

version: '3.7'

services:
  etcd1:
    image: quay.io/coreos/etcd:latest
    hostname: etcd1
    command:
      - etcd
      - --name=etcd1
      - --data-dir=/data.etcd
      - --advertise-client-urls=http://etcd1:2379
      - --listen-client-urls=http://0.0.0.0:2379
      - --initial-advertise-peer-urls=http://etcd1:2380
      - --listen-peer-urls=http://0.0.0.0:2380
      - --initial-cluster=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
      - --initial-cluster-state=new
      - --initial-cluster-token=etcd-cluster
    networks:
      - etcd
    volumes:
      - etcd1vol:/data.etcd
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.nodename == node1]
  etcd2:
    image: quay.io/coreos/etcd:latest
    hostname: etcd2
    command:
      - etcd
      - --name=etcd2
      - --data-dir=/data.etcd
      - --advertise-client-urls=http://etcd2:2379
      - --listen-client-urls=http://0.0.0.0:2379
      - --initial-advertise-peer-urls=http://etcd2:2380
      - --listen-peer-urls=http://0.0.0.0:2380
      - --initial-cluster=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
      - --initial-cluster-state=new
      - --initial-cluster-token=etcd-cluster
    networks:
      - etcd
    volumes:
      - etcd2vol:/data.etcd
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.nodename == node2]
  etcd3:
    image: quay.io/coreos/etcd:latest
    hostname: etcd3
    command:
      - etcd
      - --name=etcd3
      - --data-dir=/data.etcd
      - --advertise-client-urls=http://etcd3:2379
      - --listen-client-urls=http://0.0.0.0:2379
      - --initial-advertise-peer-urls=http://etcd3:2380
      - --listen-peer-urls=http://0.0.0.0:2380
      - --initial-cluster=etcd1=http://etcd1:2380,etcd2=http://etcd2:2380,etcd3=http://etcd3:2380
      - --initial-cluster-state=new
      - --initial-cluster-token=etcd-cluster
    networks:
      - etcd
    volumes:
      - etcd3vol:/data.etcd
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.nodename == node3]

volumes:
  etcd1vol:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/etcd"
  etcd2vol:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/etcd"
  etcd3vol:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/etcd"

networks:
  etcd:
    external: true

# docker stack deploy --compose-file 00etcd.yml etcd

After a while, we check that the etcd cluster is up:

# docker exec $(docker ps | awk '/etcd/ {print $1}')  etcdctl member list
ade526d28b1f92f7: name=etcd1 peerURLs=http://etcd1:2380 clientURLs=http://etcd1:2379 isLeader=false
bd388e7810915853: name=etcd3 peerURLs=http://etcd3:2380 clientURLs=http://etcd3:2379 isLeader=false
d282ac2ce600c1ce: name=etcd2 peerURLs=http://etcd2:2380 clientURLs=http://etcd2:2379 isLeader=true
# docker exec $(docker ps | awk '/etcd/ {print $1}')  etcdctl cluster-health
member ade526d28b1f92f7 is healthy: got healthy result from http://etcd1:2379
member bd388e7810915853 is healthy: got healthy result from http://etcd3:2379
member d282ac2ce600c1ce is healthy: got healthy result from http://etcd2:2379
cluster is healthy

Create the directories for Postgresql; run the command on all servers:

# mkdir -p /srv/pgsql

Next, create a file to configure Postgresql:

01pgsql.yml

version: '3.7'

services:
  pgsentinel:
    image: sorintlab/stolon:master-pg10
    command:
      - gosu
      - stolon
      - stolon-sentinel
      - --cluster-name=stolon-cluster
      - --store-backend=etcdv3
      - --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
      - --log-level=debug
    networks:
      - etcd
      - pgsql
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 30s
        order: stop-first
        failure_action: pause
  pgkeeper1:
    image: sorintlab/stolon:master-pg10
    hostname: pgkeeper1
    command:
      - gosu
      - stolon
      - stolon-keeper
      - --pg-listen-address=pgkeeper1
      - --pg-repl-username=replica
      - --uid=pgkeeper1
      - --pg-su-username=postgres
      - --pg-su-passwordfile=/run/secrets/pgsql
      - --pg-repl-passwordfile=/run/secrets/pgsql_repl
      - --data-dir=/var/lib/postgresql/data
      - --cluster-name=stolon-cluster
      - --store-backend=etcdv3
      - --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
    networks:
      - etcd
      - pgsql
    environment:
      - PGDATA=/var/lib/postgresql/data
    volumes:
      - pgkeeper1:/var/lib/postgresql/data
    secrets:
      - pgsql
      - pgsql_repl
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.nodename == node1]
  pgkeeper2:
    image: sorintlab/stolon:master-pg10
    hostname: pgkeeper2
    command:
      - gosu
      - stolon 
      - stolon-keeper
      - --pg-listen-address=pgkeeper2
      - --pg-repl-username=replica
      - --uid=pgkeeper2
      - --pg-su-username=postgres
      - --pg-su-passwordfile=/run/secrets/pgsql
      - --pg-repl-passwordfile=/run/secrets/pgsql_repl
      - --data-dir=/var/lib/postgresql/data
      - --cluster-name=stolon-cluster
      - --store-backend=etcdv3
      - --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
    networks:
      - etcd
      - pgsql
    environment:
      - PGDATA=/var/lib/postgresql/data
    volumes:
      - pgkeeper2:/var/lib/postgresql/data
    secrets:
      - pgsql
      - pgsql_repl
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.nodename == node2]
  pgkeeper3:
    image: sorintlab/stolon:master-pg10
    hostname: pgkeeper3
    command:
      - gosu
      - stolon 
      - stolon-keeper
      - --pg-listen-address=pgkeeper3
      - --pg-repl-username=replica
      - --uid=pgkeeper3
      - --pg-su-username=postgres
      - --pg-su-passwordfile=/run/secrets/pgsql
      - --pg-repl-passwordfile=/run/secrets/pgsql_repl
      - --data-dir=/var/lib/postgresql/data
      - --cluster-name=stolon-cluster
      - --store-backend=etcdv3
      - --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
    networks:
      - etcd
      - pgsql
    environment:
      - PGDATA=/var/lib/postgresql/data
    volumes:
      - pgkeeper3:/var/lib/postgresql/data
    secrets:
      - pgsql
      - pgsql_repl
    deploy:
      replicas: 1
      placement:
        constraints: [node.labels.nodename == node3]
  postgresql:
    image: sorintlab/stolon:master-pg10
    command: gosu stolon stolon-proxy --listen-address 0.0.0.0 --cluster-name stolon-cluster --store-backend=etcdv3 --store-endpoints http://etcd1:2379,http://etcd2:2379,http://etcd3:2379
    networks:
      - etcd
      - pgsql
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 30s
        order: stop-first
        failure_action: rollback

volumes:
  pgkeeper1:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/pgsql"
  pgkeeper2:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/pgsql"
  pgkeeper3:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/pgsql"

secrets:
  pgsql:
    file: "/srv/docker/postgres"
  pgsql_repl:
    file: "/srv/docker/replica"

networks:
  etcd:
    external: true
  pgsql:
    external: true

We generate the secrets and apply the file:

# </dev/urandom tr -dc 234567890qwertyuopasdfghjkzxcvbnmQWERTYUPASDFGHKLZXCVBNM | head -c $(((RANDOM%3)+15)) > /srv/docker/replica
# </dev/urandom tr -dc 234567890qwertyuopasdfghjkzxcvbnmQWERTYUPASDFGHKLZXCVBNM | head -c $(((RANDOM%3)+15)) > /srv/docker/postgres
# docker stack deploy --compose-file 01pgsql.yml pgsql

Some time later (check in the output of docker service ls that all the services are up), initialize the Postgresql cluster:

# docker exec $(docker ps | awk '/pgkeeper/ {print $1}') stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 init

Checking that the Postgresql cluster is ready:

# docker exec $(docker ps | awk '/pgkeeper/ {print $1}') stolonctl --cluster-name=stolon-cluster --store-backend=etcdv3 --store-endpoints=http://etcd1:2379,http://etcd2:2379,http://etcd3:2379 status
=== Active sentinels ===

ID      LEADER
26baa11d    false
74e98768    false
a8cb002b    true

=== Active proxies ===

ID
4d233826
9f562f3b
b0c79ff1

=== Keepers ===

UID     HEALTHY PG LISTENADDRESS    PG HEALTHY  PG WANTEDGENERATION PG CURRENTGENERATION
pgkeeper1   true    pgkeeper1:5432         true     2           2
pgkeeper2   true    pgkeeper2:5432          true            2                   2
pgkeeper3   true    pgkeeper3:5432          true            3                   3

=== Cluster Info ===

Master Keeper: pgkeeper3

===== Keepers/DB tree =====

pgkeeper3 (master)
├─pgkeeper2
└─pgkeeper1

We configure traefik to open access to the containers from outside:

03traefik.yml

version: '3.7'

services:
  traefik:
    image: traefik:latest
    command: >
      --log.level=INFO
      --providers.docker=true
      --entryPoints.web.address=:80
      --providers.providersThrottleDuration=2
      --providers.docker.watch=true
      --providers.docker.swarmMode=true
      --providers.docker.swarmModeRefreshSeconds=15s
      --providers.docker.exposedbydefault=false
      --accessLog.bufferingSize=0
      --api=true
      --api.dashboard=true
      --api.insecure=true
    networks:
      - traefik
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == manager
        preferences:
          - spread: node.id
      labels:
        - traefik.enable=true
        - traefik.http.routers.traefik.rule=Host(`traefik.example.com`)
        - traefik.http.services.traefik.loadbalancer.server.port=8080
        - traefik.docker.network=traefik

networks:
  traefik:
    external: true

# docker stack deploy --compose-file 03traefik.yml traefik

We launch the Redis cluster; to do this, we create a storage directory on all the nodes:

# mkdir -p /srv/redis

05redis.yml

version: '3.7'

services:
  redis-master:
    image: 'bitnami/redis:latest'
    networks:
      - redis
    ports:
      - '6379:6379'
    environment:
      - REDIS_REPLICATION_MODE=master
      - REDIS_PASSWORD=xxxxxxxxxxx
    deploy:
      mode: global
      restart_policy:
        condition: any
    volumes:
      - 'redis:/opt/bitnami/redis/etc/'

  redis-replica:
    image: 'bitnami/redis:latest'
    networks:
      - redis
    ports:
      - '6379'
    depends_on:
      - redis-master
    environment:
      - REDIS_REPLICATION_MODE=slave
      - REDIS_MASTER_HOST=redis-master
      - REDIS_MASTER_PORT_NUMBER=6379
      - REDIS_MASTER_PASSWORD=xxxxxxxxxxx
      - REDIS_PASSWORD=xxxxxxxxxxx
    deploy:
      mode: replicated
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: any

  redis-sentinel:
    image: 'bitnami/redis:latest'
    networks:
      - redis
    ports:
      - '16379'
    depends_on:
      - redis-master
      - redis-replica
    entrypoint: |
      bash -c 'bash -s <<EOF
      "/bin/bash" -c "cat <<EOF > /opt/bitnami/redis/etc/sentinel.conf
      port 16379
      dir /tmp
      sentinel monitor master-node redis-master 6379 2
      sentinel down-after-milliseconds master-node 5000
      sentinel parallel-syncs master-node 1
      sentinel failover-timeout master-node 5000
      sentinel auth-pass master-node xxxxxxxxxxx
      sentinel announce-ip redis-sentinel
      sentinel announce-port 16379
      EOF"
      "/bin/bash" -c "redis-sentinel /opt/bitnami/redis/etc/sentinel.conf"
      EOF'
    deploy:
      mode: global
      restart_policy:
        condition: any

volumes:
  redis:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: "/srv/redis"

networks:
  redis:
    external: true

# docker stack deploy --compose-file 05redis.yml redis

Add the Docker Registry:

06registry.yml

version: '3.7'

services:
  registry:
    image: registry:2.6
    networks:
      - traefik
    volumes:
      - registry_data:/var/lib/registry
    deploy:
      replicas: 1
      placement:
        constraints: [node.role == manager]
      restart_policy:
        condition: on-failure
      labels:
        - traefik.enable=true
        - traefik.http.routers.registry.rule=Host(`registry.example.com`)
        - traefik.http.services.registry.loadbalancer.server.port=5000
        - traefik.docker.network=traefik

volumes:
  registry_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/docker/registry"

networks:
  traefik:
    external: true

# mkdir /srv/docker/registry
# docker stack deploy --compose-file 06registry.yml registry

And finally, GitLab:

08gitlab-runner.yml

version: '3.7'

services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    networks:
      - pgsql
      - redis
      - traefik
      - gitlab
    ports:
      - 22222:22
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        postgresql['enable'] = false
        redis['enable'] = false
        gitlab_rails['registry_enabled'] = false
        gitlab_rails['db_username'] = "gitlab"
        gitlab_rails['db_password'] = "XXXXXXXXXXX"
        gitlab_rails['db_host'] = "postgresql"
        gitlab_rails['db_port'] = "5432"
        gitlab_rails['db_database'] = "gitlab"
        gitlab_rails['db_adapter'] = 'postgresql'
        gitlab_rails['db_encoding'] = 'utf8'
        gitlab_rails['redis_host'] = 'redis-master'
        gitlab_rails['redis_port'] = '6379'
        gitlab_rails['redis_password'] = 'xxxxxxxxxxx'
        gitlab_rails['smtp_enable'] = true
        gitlab_rails['smtp_address'] = "smtp.yandex.ru"
        gitlab_rails['smtp_port'] = 465
        gitlab_rails['smtp_user_name'] = "[email protected]"
        gitlab_rails['smtp_password'] = "xxxxxxxxx"
        gitlab_rails['smtp_domain'] = "example.com"
        gitlab_rails['gitlab_email_from'] = '[email protected]'
        gitlab_rails['smtp_authentication'] = "login"
        gitlab_rails['smtp_tls'] = true
        gitlab_rails['smtp_enable_starttls_auto'] = true
        gitlab_rails['smtp_openssl_verify_mode'] = 'peer'
        external_url 'http://gitlab.example.com/'
        gitlab_rails['gitlab_shell_ssh_port'] = 22222
    volumes:
      - gitlab_conf:/etc/gitlab
      - gitlab_logs:/var/log/gitlab
      - gitlab_data:/var/opt/gitlab
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
        - node.role == manager
      labels:
        - traefik.enable=true
        - traefik.http.routers.gitlab.rule=Host(`gitlab.example.com`)
        - traefik.http.services.gitlab.loadbalancer.server.port=80
        - traefik.docker.network=traefik
  gitlab-runner:
    image: gitlab/gitlab-runner:latest
    networks:
      - gitlab
    volumes:
      - gitlab_runner_conf:/etc/gitlab
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
        - node.role == manager

volumes:
  gitlab_conf:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/docker/gitlab/conf"
  gitlab_logs:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/docker/gitlab/logs"
  gitlab_data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/docker/gitlab/data"
  gitlab_runner_conf:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: "/srv/docker/gitlab/runner"

networks:
  pgsql:
    external: true
  redis:
    external: true
  traefik:
    external: true
  gitlab:
    external: true

# mkdir -p /srv/docker/gitlab/conf
# mkdir -p /srv/docker/gitlab/logs
# mkdir -p /srv/docker/gitlab/data
# mkdir -p /srv/docker/gitlab/runner
# docker stack deploy --compose-file 08gitlab-runner.yml gitlab

The final state of the cluster and its services:

# docker service ls
ID                  NAME                   MODE                REPLICAS            IMAGE                          PORTS
lef9n3m92buq        etcd_etcd1             replicated          1/1                 quay.io/coreos/etcd:latest
ij6uyyo792x5        etcd_etcd2             replicated          1/1                 quay.io/coreos/etcd:latest
fqttqpjgp6pp        etcd_etcd3             replicated          1/1                 quay.io/coreos/etcd:latest
hq5iyga28w33        gitlab_gitlab          replicated          1/1                 gitlab/gitlab-ce:latest        *:22222->22/tcp
dt7s6vs0q4qc        gitlab_gitlab-runner   replicated          1/1                 gitlab/gitlab-runner:latest
k7uoezno0h9n        pgsql_pgkeeper1        replicated          1/1                 sorintlab/stolon:master-pg10
cnrwul4r4nse        pgsql_pgkeeper2        replicated          1/1                 sorintlab/stolon:master-pg10
frflfnpty7tr        pgsql_pgkeeper3        replicated          1/1                 sorintlab/stolon:master-pg10
x7pqqchi52kq        pgsql_pgsentinel       replicated          3/3                 sorintlab/stolon:master-pg10
mwu2wl8fti4r        pgsql_postgresql       replicated          3/3                 sorintlab/stolon:master-pg10
9hkbe2vksbzb        redis_redis-master     global              3/3                 bitnami/redis:latest           *:6379->6379/tcp
l88zn8cla7dc        redis_redis-replica    replicated          3/3                 bitnami/redis:latest           *:30003->6379/tcp
1utp309xfmsy        redis_redis-sentinel   global              3/3                 bitnami/redis:latest           *:30002->16379/tcp
oteb824ylhyp        registry_registry      replicated          1/1                 registry:2.6
qovrah8nzzu8        traefik_traefik        replicated          3/3                 traefik:latest                 *:80->80/tcp, *:443->443/tcp

What else could be improved? You should definitely configure Traefik to serve the containers over https, and add TLS encryption for Postgresql and Redis. But on the whole, this can already be handed over to developers as a PoC. Now let's look at alternatives to Docker.
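
Before that, a hedged sketch of the https part (assuming Traefik v2 with its built-in Let's Encrypt support; the resolver name le and the e-mail are placeholders). It would boil down to roughly these additions to the traefik command and to the routers' labels, plus publishing port 443:

      --entryPoints.websecure.address=:443
      --certificatesresolvers.le.acme.email=admin@example.com
      --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      --certificatesresolvers.le.acme.tlschallenge=true

        - traefik.http.routers.registry.entrypoints=websecure
        - traefik.http.routers.registry.tls.certresolver=le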

podman

Another fairly well-known container engine, oriented towards pods (groups of containers deployed together). Unlike Docker, it does not need any service to run containers; all the work is done through the libpod library. It is also written in Go and requires an OCI-compliant runtime, such as runC, to run containers.


Working with Podman generally resembles working with Docker, to the point that you can simply do this (as claimed by many who have tried it, including the author of this article):

$ alias docker=podman

and keep working. The situation around Podman is quite interesting: while early versions of Kubernetes worked with Docker, since roughly 2015, after the standardization of the container world (OCI, the Open Container Initiative) and the split of Docker into containerd and runC, an alternative to Docker has been developed for running under Kubernetes: CRI-O. Podman in this respect is an alternative to Docker, built on Kubernetes principles, including grouping containers into pods, but the main goal of the project is to run Docker-style containers without any extra services. For obvious reasons there is no swarm mode, since the developers say plainly: if you need a cluster, take Kubernetes.

Installation

To install on CentOS 7, just enable the Extras repository and then install everything with a single command:

# yum -y install podman

Other features

Podman can generate systemd units, which solves the problem of starting containers after a server reboot. In addition, systemd is declared to work correctly as pid 1 inside containers. There is a separate tool, buildah, for building containers, and there are also third-party tools (analogues of docker-compose) that also generate Kubernetes-compatible configuration files, so moving from Podman to Kubernetes is as easy as possible.
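
A tiny buildah sketch, just to show the style (the image and the package are arbitrary):

# c=$(buildah from alpine:latest)
# buildah run "$c" -- apk add --no-cache curl
# buildah commit "$c" my-curl-image:latest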

Working with Podman

Since there is no swarm mode (we are supposed to switch to Kubernetes if a cluster is needed), we will assemble it (GitLab) in separate containers.

Install podman-compose:

# yum -y install python3-pip
# pip3 install podman-compose

The resulting configuration file for podman is slightly different: for example, we had to move what used to be a separate volumes section directly into the section with the services.

gitlab-podman.yml

version: '3.7'

services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    hostname: gitlab.example.com
    restart: unless-stopped
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        gitlab_rails['gitlab_shell_ssh_port'] = 22222
    ports:
      - "80:80"
      - "22222:22"
    volumes:
      - /srv/podman/gitlab/conf:/etc/gitlab
      - /srv/podman/gitlab/data:/var/opt/gitlab
      - /srv/podman/gitlab/logs:/var/log/gitlab
    networks:
      - gitlab

  gitlab-runner:
    image: gitlab/gitlab-runner:alpine
    restart: unless-stopped
    depends_on:
      - gitlab
    volumes:
      - /srv/podman/gitlab/runner:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - gitlab

networks:
  gitlab:

# podman-compose -f gitlab-podman.yml up -d

The result:

# podman ps
CONTAINER ID  IMAGE                                  COMMAND               CREATED             STATUS                 PORTS                                      NAMES
da53da946c01  docker.io/gitlab/gitlab-runner:alpine  run --user=gitlab...  About a minute ago  Up About a minute ago  0.0.0.0:22222->22/tcp, 0.0.0.0:80->80/tcp  root_gitlab-runner_1
781c0103c94a  docker.io/gitlab/gitlab-ce:latest      /assets/wrapper       About a minute ago  Up About a minute ago  0.0.0.0:22222->22/tcp, 0.0.0.0:80->80/tcp  root_gitlab_1

Let's see what it will generate for systemd and Kubernetes; for this we need to find out the name or ID of the pod:

# podman pod ls
POD ID         NAME   STATUS    CREATED          # OF CONTAINERS   INFRA ID
71fc2b2a5c63   root   Running   11 minutes ago   3                 db40ab8bf84b

Kubernetes:

# podman generate kube 71fc2b2a5c63
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.6.4
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2020-07-29T19:22:40Z"
  labels:
    app: root
  name: root
spec:
  containers:
  - command:
    - /assets/wrapper
    env:
    - name: PATH
      value: /opt/gitlab/embedded/bin:/opt/gitlab/bin:/assets:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
      value: gitlab.example.com
    - name: container
      value: podman
    - name: GITLAB_OMNIBUS_CONFIG
      value: |
        gitlab_rails['gitlab_shell_ssh_port'] = 22222
    - name: LANG
      value: C.UTF-8
    image: docker.io/gitlab/gitlab-ce:latest
    name: rootgitlab1
    ports:
    - containerPort: 22
      hostPort: 22222
      protocol: TCP
    - containerPort: 80
      hostPort: 80
      protocol: TCP
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    volumeMounts:
    - mountPath: /var/opt/gitlab
      name: srv-podman-gitlab-data
    - mountPath: /var/log/gitlab
      name: srv-podman-gitlab-logs
    - mountPath: /etc/gitlab
      name: srv-podman-gitlab-conf
    workingDir: /
  - command:
    - run
    - --user=gitlab-runner
    - --working-directory=/home/gitlab-runner
    env:
    - name: PATH
      value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    - name: TERM
      value: xterm
    - name: HOSTNAME
    - name: container
      value: podman
    image: docker.io/gitlab/gitlab-runner:alpine
    name: rootgitlab-runner1
    resources: {}
    securityContext:
      allowPrivilegeEscalation: true
      capabilities: {}
      privileged: false
      readOnlyRootFilesystem: false
    volumeMounts:
    - mountPath: /etc/gitlab-runner
      name: srv-podman-gitlab-runner
    - mountPath: /var/run/docker.sock
      name: var-run-docker.sock
    workingDir: /
  volumes:
  - hostPath:
      path: /srv/podman/gitlab/runner
      type: Directory
    name: srv-podman-gitlab-runner
  - hostPath:
      path: /var/run/docker.sock
      type: File
    name: var-run-docker.sock
  - hostPath:
      path: /srv/podman/gitlab/data
      type: Directory
    name: srv-podman-gitlab-data
  - hostPath:
      path: /srv/podman/gitlab/logs
      type: Directory
    name: srv-podman-gitlab-logs
  - hostPath:
      path: /srv/podman/gitlab/conf
      type: Directory
    name: srv-podman-gitlab-conf
status: {}

systemd:

# podman generate systemd 71fc2b2a5c63
# pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
# autogenerated by Podman 1.6.4
# Thu Jul 29 15:23:28 EDT 2020

[Unit]
Description=Podman pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
Documentation=man:podman-generate-systemd(1)
Requires=container-781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3.service container-da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864.service
Before=container-781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3.service container-da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864.service

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start db40ab8bf84bf35141159c26cb6e256b889c7a98c0418eee3c4aa683c14fccaa
ExecStop=/usr/bin/podman stop -t 10 db40ab8bf84bf35141159c26cb6e256b889c7a98c0418eee3c4aa683c14fccaa
KillMode=none
Type=forking
PIDFile=/var/run/containers/storage/overlay-containers/db40ab8bf84bf35141159c26cb6e256b889c7a98c0418eee3c4aa683c14fccaa/userdata/conmon.pid

[Install]
WantedBy=multi-user.target
# container-da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864.service
# autogenerated by Podman 1.6.4
# Thu Jul 29 15:23:28 EDT 2020

[Unit]
Description=Podman container-da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864.service
Documentation=man:podman-generate-systemd(1)
RefuseManualStart=yes
RefuseManualStop=yes
BindsTo=pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
After=pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864
ExecStop=/usr/bin/podman stop -t 10 da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864
KillMode=none
Type=forking
PIDFile=/var/run/containers/storage/overlay-containers/da53da946c01449f500aa5296d9ea6376f751948b17ca164df438b7df6607864/userdata/conmon.pid

[Install]
WantedBy=multi-user.target
# container-781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3.service
# autogenerated by Podman 1.6.4
# Thu Jul 29 15:23:28 EDT 2020

[Unit]
Description=Podman container-781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3.service
Documentation=man:podman-generate-systemd(1)
RefuseManualStart=yes
RefuseManualStop=yes
BindsTo=pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service
After=pod-71fc2b2a5c6346f0c1c86a2dc45dbe78fa192ea02aac001eb8347ccb8c043c26.service

[Service]
Restart=on-failure
ExecStart=/usr/bin/podman start 781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3
ExecStop=/usr/bin/podman stop -t 10 781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3
KillMode=none
Type=forking
PIDFile=/var/run/containers/storage/overlay-containers/781c0103c94aaa113c17c58d05ddabf8df4bf39707b664abcf17ed2ceff467d3/userdata/conmon.pid

[Install]
WantedBy=multi-user.target

Unfortunately, apart from starting the containers, the unit generated for systemd does nothing else (for example, it does not clean up old containers when such a service is restarted), so you will have to add such things yourself.
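
One possible mitigation, assuming a newer Podman release than the 1.6.4 shown above: podman generate systemd later gained a --new flag, which produces units that create a fresh container (or pod) on every start and remove it on stop, so most of the cleanup happens automatically:

# podman generate systemd --new --files --name root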

Basically, Podman is enough to try out what containers are, to migrate old docker-compose configurations, and then, if needed, to move on to Kubernetes for a cluster, or to get an easier-to-use alternative to Docker.

rkt

The project went into the archive about six months ago, after Red Hat bought it (more precisely, CoreOS, the company behind it), so I will not dwell on it in more detail. Overall it left a very good impression, but compared with Docker, and even more so with Podman, it comes across as cumbersome. There was also a CoreOS distribution built on top of rkt (although they originally had Docker), but that too came to an end after the Red Hat purchase.

Mwako

One more project, whose author simply wanted to build and run containers. Judging by the documentation and the code, the author did not follow the standards, but just decided to write his own implementation, which, in principle, he did.

Conclusions

The situation with Kubernetes is very interesting: on the one hand, with Docker you can assemble a cluster (in swarm mode) on which you can even run production environments for customers. This is especially true for small teams (3-5 people), for a small overall load, or simply when there is no desire to dig into the intricacies of setting up Kubernetes, including for high loads.

Podman does not provide full compatibility, but it has one important advantage: compatibility with Kubernetes, including the accompanying tools (buildah and others). So I would approach the choice of a working tool as follows: for small teams or with a limited budget, Docker (with a possible swarm mode); for developing for yourself on a personal host, Podman, comrades; and for everyone else, Kubernetes.

I am not sure that the situation with Docker will not change in the future; after all, they are the pioneers, and they are also slowly standardizing step by step. But Podman, with all its shortcomings (it only works on Linux, there is no clustering, building and some other things are handled by separate third-party tools), has a clearer future, so I invite everyone to discuss these conclusions in the comments.

PS On August 3 we are launching the "Docker Video Course", where you can learn more about how it works. We will go through all of its tooling: from the basic abstractions to network settings, and the nuances of working with various operating systems and programming languages. You will get to know the technology and understand where and how best to use Docker. We will also share best-practice cases.

The pre-order price before release is 5000 rubles. The full "Docker Video Course" programme can be found on the course page.

Source: mapenzi.com
