How does kubectl exec work?

Translator's note: the author of this article, Erkan Erol, an engineer at SAP, shares his study of how the kubectl exec command works, so familiar to everyone who works with Kubernetes. He walks through the entire algorithm with listings from the Kubernetes source code (and related projects) that let you understand the topic as deeply as you need.

One Friday, a colleague came to me and asked how to execute a command in a pod using client-go. I couldn't answer and suddenly realized that I knew nothing about how kubectl exec works. Yes, I had some ideas about how it is put together, but I wasn't 100% sure they were correct, so I decided to tackle the topic. After studying blogs, documentation, and source code, I learned a lot of new things, and in this article I want to share my discoveries and understanding. If anything is wrong, please reach out to me on Twitter.

Preparation

To create a cluster on my MacBook, I cloned ecomm-integration-ballerina/kubernetes-cluster. Then I fixed the IP addresses of the nodes in the kubelet configuration, since the default settings did not allow kubectl exec. You can read more about the underlying reason here.

  • any machine = my MacBook
  • master node IP = 192.168.205.10
  • worker node IP = 192.168.205.11
  • API server port = 6443

Components

  • kubectl exec process: when we run "kubectl exec ...", a process starts. This can be done on any machine with access to the K8s API server. (Translator's note: in the console listings below, the author uses the comment "any machine", meaning that the subsequent commands can be run on any machine with access to Kubernetes.)
  • api-server: a component on the master node that provides access to the Kubernetes API. It is the frontend for the Kubernetes control plane.
  • kubelet: an agent that runs on every node in the cluster. It makes sure the containers in a pod are running.
  • container runtime: the software responsible for running containers. Examples: Docker, CRI-O, containerd...
  • kernel: the OS kernel on the worker node; it is responsible for process management.
  • target container: a container that is part of a pod and runs on one of the worker nodes.

What I found out

1. Activity on the client side

Create a pod in the default namespace:

// any machine
$ kubectl run exec-test-nginx --image=nginx

Then we run the exec command and sleep for 5000 seconds to make further observations:

// any machine
$ kubectl exec -it exec-test-nginx-6558988d5-fgxgg -- sh
# sleep 5000

A kubectl process appears (with pid=8507 in our case):

// any machine
$ ps -ef |grep kubectl
501  8507  8409   0  7:19PM ttys000    0:00.13 kubectl exec -it exec-test-nginx-6558988d5-fgxgg -- sh

If we check the process's network activity, we see it has connections to the api-server (192.168.205.10:6443):

// any machine
$ netstat -atnv |grep 8507
tcp4       0      0  192.168.205.1.51673    192.168.205.10.6443    ESTABLISHED 131072 131768   8507      0 0x0102 0x00000020
tcp4       0      0  192.168.205.1.51672    192.168.205.10.6443    ESTABLISHED 131072 131768   8507      0 0x0102 0x00000028

Let's look at the code. kubectl creates a POST request with the exec subresource and sends a REST request:

req := restClient.Post().
        Resource("pods").
        Name(pod.Name).
        Namespace(pod.Namespace).
        SubResource("exec")
req.VersionedParams(&corev1.PodExecOptions{
        Container: containerName,
        Command:   p.Command,
        Stdin:     p.Stdin,
        Stdout:    p.Out != nil,
        Stderr:    p.ErrOut != nil,
        TTY:       t.Raw,
}, scheme.ParameterCodec)

return p.Executor.Execute("POST", req.URL(), p.Config, p.In, p.Out, p.ErrOut, t.Raw, sizeQueue)

(kubectl/pkg/cmd/exec/exec.go)
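
Incidentally, this also answers the colleague's question about client-go: below is a minimal sketch of the same exec call made with client-go directly. The kubeconfig path, pod name, and command are assumptions for illustration, and error handling is reduced to panics.

// any machine with a kubeconfig
package main

import (
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
)

func main() {
        // Load credentials the way kubectl does (path assumed).
        config, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
        if err != nil {
                panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
                panic(err)
        }

        // Build the same POST .../pods/<name>/exec request kubectl builds.
        req := clientset.CoreV1().RESTClient().Post().
                Resource("pods").
                Name("exec-test-nginx-6558988d5-fgxgg").
                Namespace("default").
                SubResource("exec").
                VersionedParams(&corev1.PodExecOptions{
                        Command: []string{"sh", "-c", "echo hello"},
                        Stdout:  true,
                        Stderr:  true,
                }, scheme.ParameterCodec)

        // The SPDY executor performs the protocol upgrade discussed below.
        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
                panic(err)
        }
        if err := exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr}); err != nil {
                panic(err)
        }
}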

2. Activity on the master node

We can also observe the request on the api-server side:

handler.go:143] kube-apiserver: POST "/api/v1/namespaces/default/pods/exec-test-nginx-6558988d5-fgxgg/exec" satisfied by gorestful with webservice /api/v1
upgradeaware.go:261] Connecting to backend proxy (intercepting redirects) https://192.168.205.11:10250/exec/default/exec-test-nginx-6558988d5-fgxgg/exec-test-nginx?command=sh&input=1&output=1&tty=1
Headers: map[Connection:[Upgrade] Content-Length:[0] Upgrade:[SPDY/3.1] User-Agent:[kubectl/v1.12.10 (darwin/amd64) kubernetes/e3c1340] X-Forwarded-For:[192.168.205.1] X-Stream-Protocol-Version:[v4.channel.k8s.io v3.channel.k8s.io v2.channel.k8s.io channel.k8s.io]]

Note that the HTTP request includes a request to upgrade the protocol. SPDY makes it possible to multiplex the individual stdin/stdout/stderr/error "streams" over a single TCP connection.
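
For reference, on the wire each multiplexed stream is identified by a streamType header; the constants below come from k8s.io/api/core/v1, and the sketch only prints them:

package main

import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
)

func main() {
        fmt.Println(corev1.StreamType) // name of the header: "streamType"
        fmt.Println(
                corev1.StreamTypeStdin,  // "stdin"
                corev1.StreamTypeStdout, // "stdout"
                corev1.StreamTypeStderr, // "stderr"
                corev1.StreamTypeError,  // "error": carries the exit status
                corev1.StreamTypeResize, // "resize": terminal size events
        )
}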

The API server receives the request and converts it into PodExecOptions:

// PodExecOptions is the query options to a Pod's remote exec call
type PodExecOptions struct {
        metav1.TypeMeta

        // Stdin if true indicates that stdin is to be redirected for the exec call
        Stdin bool

        // Stdout if true indicates that stdout is to be redirected for the exec call
        Stdout bool

        // Stderr if true indicates that stderr is to be redirected for the exec call
        Stderr bool

        // TTY if true indicates that a tty will be allocated for the exec call
        TTY bool

        // Container in which to execute the command.
        Container string

        // Command is the remote command to execute; argv array; not executed within a shell.
        Command []string
}

(pkg/apis/core/types.go)
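
For reference, these options arrive at the apiserver as query parameters of the exec subresource URL, with the command parameter repeated once per argv element. A small sketch of such a URL (the container name is an assumption):

package main

import (
        "fmt"
        "net/url"
)

func main() {
        q := url.Values{}
        q.Add("command", "sh") // repeat "command" once per argv element
        q.Set("container", "exec-test-nginx")
        q.Set("stdin", "true")
        q.Set("stdout", "true")
        q.Set("tty", "true")
        fmt.Println("/api/v1/namespaces/default/pods/exec-test-nginx-6558988d5-fgxgg/exec?" + q.Encode())
}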

To perform the required actions, the api-server needs to know where to reach the pod:

// ExecLocation returns the exec URL for a pod container. If opts.Container is blank
// and only one container is present in the pod, that container is used.
func ExecLocation(
        getter ResourceGetter,
        connInfo client.ConnectionInfoGetter,
        ctx context.Context,
        name string,
        opts *api.PodExecOptions,
) (*url.URL, http.RoundTripper, error) {
        return streamLocation(getter, connInfo, ctx, name, opts, opts.Container, "exec")
}

(pkg/registry/core/pod/strategy.go)

Naturally, the data about the endpoint is taken from the node information:

        nodeName := types.NodeName(pod.Spec.NodeName)
        if len(nodeName) == 0 {
                // If pod has not been assigned a host, return an empty location
                return nil, nil, errors.NewBadRequest(fmt.Sprintf("pod %s does not have a host assigned", name))
        }
        nodeInfo, err := connInfo.GetConnectionInfo(ctx, nodeName)

(pkg/registry/core/pod/strategy.go)

Hooray! The kubelet now has a port (node.Status.DaemonEndpoints.KubeletEndpoint.Port) that the api-server can connect to:

// GetConnectionInfo retrieves connection info from the status of a Node API object.
func (k *NodeConnectionInfoGetter) GetConnectionInfo(ctx context.Context, nodeName types.NodeName) (*ConnectionInfo, error) {
        node, err := k.nodes.Get(ctx, string(nodeName), metav1.GetOptions{})
        if err != nil {
                return nil, err
        }

        // Find a kubelet-reported address, using preferred address type
        host, err := nodeutil.GetPreferredNodeAddress(node, k.preferredAddressTypes)
        if err != nil {
                return nil, err
        }

        // Use the kubelet-reported port, if present
        port := int(node.Status.DaemonEndpoints.KubeletEndpoint.Port)
        if port <= 0 {
                port = k.defaultPort
        }

        return &ConnectionInfo{
                Scheme:    k.scheme,
                Hostname:  host,
                Port:      strconv.Itoa(port),
                Transport: k.transport,
        }, nil
}

(pkg/kubelet/client/kubelet_client.go)

From the documentation on Master-Node communication > Master to Cluster > apiserver to kubelet:

These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle (MITM) attacks and unsafe to run over untrusted and/or public networks. (The same page recommends the --kubelet-certificate-authority flag to give the apiserver a root certificate bundle for verifying the kubelet's certificate.)

Now the API server knows the endpoint and establishes the connection:

// Connect returns a handler for the pod exec proxy
func (r *ExecREST) Connect(ctx context.Context, name string, opts runtime.Object, responder rest.Responder) (http.Handler, error) {
        execOpts, ok := opts.(*api.PodExecOptions)
        if !ok {
                return nil, fmt.Errorf("invalid options object: %#v", opts)
        }
        location, transport, err := pod.ExecLocation(r.Store, r.KubeletConn, ctx, name, execOpts)
        if err != nil {
                return nil, err
        }
        return newThrottledUpgradeAwareProxyHandler(location, transport, false, true, true, responder), nil
}

(pkg/registry/core/pod/rest/subresources.go)
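
The handler returned here is an upgrade-aware proxy. In spirit (and only in spirit: this toy version skips TLS, throttling, redirect interception, and buffered-data handling), it forwards the upgrade request to the backend, hijacks the client connection, and then simply copies bytes in both directions:

package main

import (
        "io"
        "net"
        "net/http"
)

func proxyUpgrade(backend string) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
                // Open a raw TCP connection to the kubelet-style backend.
                backendConn, err := net.Dial("tcp", backend)
                if err != nil {
                        http.Error(w, err.Error(), http.StatusBadGateway)
                        return
                }
                defer backendConn.Close()

                // Take over the client connection from net/http.
                hj, ok := w.(http.Hijacker)
                if !ok {
                        http.Error(w, "hijacking unsupported", http.StatusInternalServerError)
                        return
                }
                clientConn, _, err := hj.Hijack()
                if err != nil {
                        return
                }
                defer clientConn.Close()

                // Replay the upgrade request to the backend, then splice the two
                // connections together for the lifetime of the streaming session.
                if err := r.Write(backendConn); err != nil {
                        return
                }
                go io.Copy(backendConn, clientConn)
                io.Copy(clientConn, backendConn)
        }
}

func main() {
        // Backend address reuses the kubelet endpoint from this article.
        http.ListenAndServe(":8080", proxyUpgrade("192.168.205.11:10250"))
}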

Let's look at what happens on the master node.

First, we find out the IP of the worker node. In our case it is 192.168.205.11:

// any machine
$ kubectl get nodes k8s-node-1 -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-node-1   Ready    <none>   9h    v1.15.3   192.168.205.11   <none>        Ubuntu 16.04.6 LTS   4.4.0-159-generic   docker://17.3.3

Then we determine the kubelet port (10250 in our case):

// any machine
$ kubectl get nodes k8s-node-1 -o jsonpath='{.status.daemonEndpoints.kubeletEndpoint}'
map[Port:10250]

Now it's time to check the network. Is there a connection to the worker node (192.168.205.11)? There is! If the exec process is killed, it disappears, so I know the connection was established by the api-server as a result of the exec command.

// master node
$ netstat -atn |grep 192.168.205.11
tcp        0      0 192.168.205.10:37870    192.168.205.11:10250    ESTABLISHED
…

The connection between kubectl and the api-server is still open. In addition, there is another connection linking the api-server and the kubelet.

3. Activity on the worker node

Now let's connect to the worker node and see what happens on it.

First of all, we see that the connection is established here as well (second line); 192.168.205.10 is the master node's IP:

// worker node
$ netstat -atn |grep 10250
tcp6       0      0 :::10250                :::*                    LISTEN
tcp6       0      0 192.168.205.11:10250    192.168.205.10:37870    ESTABLISHED

And what about our sleep command? Hooray, it's there too!

// worker node
$ ps -afx
...
31463 ?        Sl     0:00      _ docker-containerd-shim 7d974065bbb3107074ce31c51f5ef40aea8dcd535ae11a7b8f2dd180b8ed583a /var/run/docker/libcontainerd/7d974065bbb3107074ce31c51
31478 pts/0    Ss     0:00          _ sh
31485 pts/0    S+     0:00              _ sleep 5000
...

But wait: how did the kubelet pull that off? The kubelet runs a daemon that serves the API over a port for api-server requests:

// Server is the library interface to serve the stream requests.
type Server interface {
        http.Handler

        // Get the serving URL for the requests.
        // Requests must not be nil. Responses may be nil iff an error is returned.
        GetExec(*runtimeapi.ExecRequest) (*runtimeapi.ExecResponse, error)
        GetAttach(req *runtimeapi.AttachRequest) (*runtimeapi.AttachResponse, error)
        GetPortForward(*runtimeapi.PortForwardRequest) (*runtimeapi.PortForwardResponse, error)

        // Start the server.
        // addr is the address to serve on (address:port) stayUp indicates whether the server should
        // listen until Stop() is called, or automatically stop after all expected connections are
        // closed. Calling Get{Exec,Attach,PortForward} increments the expected connection count.
        // Function does not return until the server is stopped.
        Start(stayUp bool) error
        // Stop the server, and terminate any open connections.
        Stop() error
}

(pkg/kubelet/server/streaming/server.go)

The kubelet computes the response endpoint for exec requests:

func (s *server) GetExec(req *runtimeapi.ExecRequest) (*runtimeapi.ExecResponse, error) {
        if err := validateExecRequest(req); err != nil {
                return nil, err
        }
        token, err := s.cache.Insert(req)
        if err != nil {
                return nil, err
        }
        return &runtimeapi.ExecResponse{
                Url: s.buildURL("exec", token),
        }, nil
}

(pkg/kubelet/server/streaming/server.go)

Don't be confused: it returns not the result of the command, but an endpoint for communication:

type ExecResponse struct {
        // Fully qualified URL of the exec streaming server.
        Url                  string   `protobuf:"bytes,1,opt,name=url,proto3" json:"url,omitempty"`
        XXX_NoUnkeyedLiteral struct{} `json:"-"`
        XXX_sizecache        int32    `json:"-"`
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)
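
A rough reconstruction of what s.buildURL returns (the base URL is an assumption for illustration): a path of the form <base>/exec/<token>.

package main

import (
        "fmt"
        "net/url"
        "path"
)

// buildURL mimics the kubelet streaming server: <base>/<method>/<token>.
func buildURL(base *url.URL, method, token string) string {
        return base.ResolveReference(&url.URL{Path: path.Join(method, token)}).String()
}

func main() {
        base, _ := url.Parse("http://127.0.0.1:44134/")
        fmt.Println(buildURL(base, "exec", "abc123")) // http://127.0.0.1:44134/exec/abc123
}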

The kubelet implements the RuntimeServiceClient interface, which is part of the Container Runtime Interface (translator's note: we have written more about it, for example here):

Long listing from cri-api in kubernetes/kubernetes

// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type RuntimeServiceClient interface {
        // Version returns the runtime name, runtime version, and runtime API version.
        Version(ctx context.Context, in *VersionRequest, opts ...grpc.CallOption) (*VersionResponse, error)
        // RunPodSandbox creates and starts a pod-level sandbox. Runtimes must ensure
        // the sandbox is in the ready state on success.
        RunPodSandbox(ctx context.Context, in *RunPodSandboxRequest, opts ...grpc.CallOption) (*RunPodSandboxResponse, error)
        // StopPodSandbox stops any running process that is part of the sandbox and
        // reclaims network resources (e.g., IP addresses) allocated to the sandbox.
        // If there are any running containers in the sandbox, they must be forcibly
        // terminated.
        // This call is idempotent, and must not return an error if all relevant
        // resources have already been reclaimed. kubelet will call StopPodSandbox
        // at least once before calling RemovePodSandbox. It will also attempt to
        // reclaim resources eagerly, as soon as a sandbox is not needed. Hence,
        // multiple StopPodSandbox calls are expected.
        StopPodSandbox(ctx context.Context, in *StopPodSandboxRequest, opts ...grpc.CallOption) (*StopPodSandboxResponse, error)
        // RemovePodSandbox removes the sandbox. If there are any running containers
        // in the sandbox, they must be forcibly terminated and removed.
        // This call is idempotent, and must not return an error if the sandbox has
        // already been removed.
        RemovePodSandbox(ctx context.Context, in *RemovePodSandboxRequest, opts ...grpc.CallOption) (*RemovePodSandboxResponse, error)
        // PodSandboxStatus returns the status of the PodSandbox. If the PodSandbox is not
        // present, returns an error.
        PodSandboxStatus(ctx context.Context, in *PodSandboxStatusRequest, opts ...grpc.CallOption) (*PodSandboxStatusResponse, error)
        // ListPodSandbox returns a list of PodSandboxes.
        ListPodSandbox(ctx context.Context, in *ListPodSandboxRequest, opts ...grpc.CallOption) (*ListPodSandboxResponse, error)
        // CreateContainer creates a new container in specified PodSandbox
        CreateContainer(ctx context.Context, in *CreateContainerRequest, opts ...grpc.CallOption) (*CreateContainerResponse, error)
        // StartContainer starts the container.
        StartContainer(ctx context.Context, in *StartContainerRequest, opts ...grpc.CallOption) (*StartContainerResponse, error)
        // StopContainer stops a running container with a grace period (i.e., timeout).
        // This call is idempotent, and must not return an error if the container has
        // already been stopped.
        // TODO: what must the runtime do after the grace period is reached?
        StopContainer(ctx context.Context, in *StopContainerRequest, opts ...grpc.CallOption) (*StopContainerResponse, error)
        // RemoveContainer removes the container. If the container is running, the
        // container must be forcibly removed.
        // This call is idempotent, and must not return an error if the container has
        // already been removed.
        RemoveContainer(ctx context.Context, in *RemoveContainerRequest, opts ...grpc.CallOption) (*RemoveContainerResponse, error)
        // ListContainers lists all containers by filters.
        ListContainers(ctx context.Context, in *ListContainersRequest, opts ...grpc.CallOption) (*ListContainersResponse, error)
        // ContainerStatus returns status of the container. If the container is not
        // present, returns an error.
        ContainerStatus(ctx context.Context, in *ContainerStatusRequest, opts ...grpc.CallOption) (*ContainerStatusResponse, error)
        // UpdateContainerResources updates ContainerConfig of the container.
        UpdateContainerResources(ctx context.Context, in *UpdateContainerResourcesRequest, opts ...grpc.CallOption) (*UpdateContainerResourcesResponse, error)
        // ReopenContainerLog asks runtime to reopen the stdout/stderr log file
        // for the container. This is often called after the log file has been
        // rotated. If the container is not running, container runtime can choose
        // to either create a new log file and return nil, or return an error.
        // Once it returns error, new container log file MUST NOT be created.
        ReopenContainerLog(ctx context.Context, in *ReopenContainerLogRequest, opts ...grpc.CallOption) (*ReopenContainerLogResponse, error)
        // ExecSync runs a command in a container synchronously.
        ExecSync(ctx context.Context, in *ExecSyncRequest, opts ...grpc.CallOption) (*ExecSyncResponse, error)
        // Exec prepares a streaming endpoint to execute a command in the container.
        Exec(ctx context.Context, in *ExecRequest, opts ...grpc.CallOption) (*ExecResponse, error)
        // Attach prepares a streaming endpoint to attach to a running container.
        Attach(ctx context.Context, in *AttachRequest, opts ...grpc.CallOption) (*AttachResponse, error)
        // PortForward prepares a streaming endpoint to forward ports from a PodSandbox.
        PortForward(ctx context.Context, in *PortForwardRequest, opts ...grpc.CallOption) (*PortForwardResponse, error)
        // ContainerStats returns stats of the container. If the container does not
        // exist, the call returns an error.
        ContainerStats(ctx context.Context, in *ContainerStatsRequest, opts ...grpc.CallOption) (*ContainerStatsResponse, error)
        // ListContainerStats returns stats of all running containers.
        ListContainerStats(ctx context.Context, in *ListContainerStatsRequest, opts ...grpc.CallOption) (*ListContainerStatsResponse, error)
        // UpdateRuntimeConfig updates the runtime configuration based on the given request.
        UpdateRuntimeConfig(ctx context.Context, in *UpdateRuntimeConfigRequest, opts ...grpc.CallOption) (*UpdateRuntimeConfigResponse, error)
        // Status returns the status of the runtime.
        Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (*StatusResponse, error)
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)

It simply uses gRPC to invoke a method via the Container Runtime Interface:

type runtimeServiceClient struct {
        cc *grpc.ClientConn
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)

func (c *runtimeServiceClient) Exec(ctx context.Context, in *ExecRequest, opts ...grpc.CallOption) (*ExecResponse, error) {
        out := new(ExecResponse)
        err := c.cc.Invoke(ctx, "/runtime.v1alpha2.RuntimeService/Exec", in, out, opts...)
        if err != nil {
                return nil, err
        }
        return out, nil
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)
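
To make this hop tangible, here is a hedged sketch of the same gRPC call done by hand: dial the runtime's unix socket and ask for an exec streaming endpoint. The socket path and the container ID are placeholders (dockershim shown; CRI-O listens on /var/run/crio/crio.sock):

package main

import (
        "context"
        "fmt"
        "net"
        "time"

        "google.golang.org/grpc"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

func main() {
        conn, err := grpc.Dial(
                "/var/run/dockershim.sock",
                grpc.WithInsecure(),
                grpc.WithContextDialer(func(ctx context.Context, addr string) (net.Conn, error) {
                        return (&net.Dialer{}).DialContext(ctx, "unix", addr)
                }),
        )
        if err != nil {
                panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // As with the kubelet, the response is a URL to stream from,
        // not the command output.
        resp, err := client.Exec(ctx, &runtimeapi.ExecRequest{
                ContainerId: "<container-id>", // take it from ListContainers in real use
                Cmd:         []string{"sh"},
                Tty:         true,
                Stdin:       true,
        })
        if err != nil {
                panic(err)
        }
        fmt.Println("stream here:", resp.Url)
}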

The container runtime is responsible for implementing RuntimeServiceServer:

Long listing from cri-api in kubernetes/kubernetes

// RuntimeServiceServer is the server API for RuntimeService service.
type RuntimeServiceServer interface {
        // Version returns the runtime name, runtime version, and runtime API version.
        Version(context.Context, *VersionRequest) (*VersionResponse, error)
        // RunPodSandbox creates and starts a pod-level sandbox. Runtimes must ensure
        // the sandbox is in the ready state on success.
        RunPodSandbox(context.Context, *RunPodSandboxRequest) (*RunPodSandboxResponse, error)
        // StopPodSandbox stops any running process that is part of the sandbox and
        // reclaims network resources (e.g., IP addresses) allocated to the sandbox.
        // If there are any running containers in the sandbox, they must be forcibly
        // terminated.
        // This call is idempotent, and must not return an error if all relevant
        // resources have already been reclaimed. kubelet will call StopPodSandbox
        // at least once before calling RemovePodSandbox. It will also attempt to
        // reclaim resources eagerly, as soon as a sandbox is not needed. Hence,
        // multiple StopPodSandbox calls are expected.
        StopPodSandbox(context.Context, *StopPodSandboxRequest) (*StopPodSandboxResponse, error)
        // RemovePodSandbox removes the sandbox. If there are any running containers
        // in the sandbox, they must be forcibly terminated and removed.
        // This call is idempotent, and must not return an error if the sandbox has
        // already been removed.
        RemovePodSandbox(context.Context, *RemovePodSandboxRequest) (*RemovePodSandboxResponse, error)
        // PodSandboxStatus returns the status of the PodSandbox. If the PodSandbox is not
        // present, returns an error.
        PodSandboxStatus(context.Context, *PodSandboxStatusRequest) (*PodSandboxStatusResponse, error)
        // ListPodSandbox returns a list of PodSandboxes.
        ListPodSandbox(context.Context, *ListPodSandboxRequest) (*ListPodSandboxResponse, error)
        // CreateContainer creates a new container in specified PodSandbox
        CreateContainer(context.Context, *CreateContainerRequest) (*CreateContainerResponse, error)
        // StartContainer starts the container.
        StartContainer(context.Context, *StartContainerRequest) (*StartContainerResponse, error)
        // StopContainer stops a running container with a grace period (i.e., timeout).
        // This call is idempotent, and must not return an error if the container has
        // already been stopped.
        // TODO: what must the runtime do after the grace period is reached?
        StopContainer(context.Context, *StopContainerRequest) (*StopContainerResponse, error)
        // RemoveContainer removes the container. If the container is running, the
        // container must be forcibly removed.
        // This call is idempotent, and must not return an error if the container has
        // already been removed.
        RemoveContainer(context.Context, *RemoveContainerRequest) (*RemoveContainerResponse, error)
        // ListContainers lists all containers by filters.
        ListContainers(context.Context, *ListContainersRequest) (*ListContainersResponse, error)
        // ContainerStatus returns status of the container. If the container is not
        // present, returns an error.
        ContainerStatus(context.Context, *ContainerStatusRequest) (*ContainerStatusResponse, error)
        // UpdateContainerResources updates ContainerConfig of the container.
        UpdateContainerResources(context.Context, *UpdateContainerResourcesRequest) (*UpdateContainerResourcesResponse, error)
        // ReopenContainerLog asks runtime to reopen the stdout/stderr log file
        // for the container. This is often called after the log file has been
        // rotated. If the container is not running, container runtime can choose
        // to either create a new log file and return nil, or return an error.
        // Once it returns error, new container log file MUST NOT be created.
        ReopenContainerLog(context.Context, *ReopenContainerLogRequest) (*ReopenContainerLogResponse, error)
        // ExecSync runs a command in a container synchronously.
        ExecSync(context.Context, *ExecSyncRequest) (*ExecSyncResponse, error)
        // Exec prepares a streaming endpoint to execute a command in the container.
        Exec(context.Context, *ExecRequest) (*ExecResponse, error)
        // Attach prepares a streaming endpoint to attach to a running container.
        Attach(context.Context, *AttachRequest) (*AttachResponse, error)
        // PortForward prepares a streaming endpoint to forward ports from a PodSandbox.
        PortForward(context.Context, *PortForwardRequest) (*PortForwardResponse, error)
        // ContainerStats returns stats of the container. If the container does not
        // exist, the call returns an error.
        ContainerStats(context.Context, *ContainerStatsRequest) (*ContainerStatsResponse, error)
        // ListContainerStats returns stats of all running containers.
        ListContainerStats(context.Context, *ListContainerStatsRequest) (*ListContainerStatsResponse, error)
        // UpdateRuntimeConfig updates the runtime configuration based on the given request.
        UpdateRuntimeConfig(context.Context, *UpdateRuntimeConfigRequest) (*UpdateRuntimeConfigResponse, error)
        // Status returns the status of the runtime.
        Status(context.Context, *StatusRequest) (*StatusResponse, error)
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)

In that case, we should see a connection between the kubelet and the container runtime, right? Let's check.

Run this command before and after the exec command and look at the differences. In my case the difference is:

// worker node
$ ss -a -p |grep kubelet
...
u_str  ESTAB      0      0       * 157937                * 157387                users:(("kubelet",pid=5714,fd=33))
...

Hmmm... A new connection over unix sockets between the kubelet (pid=5714) and something unknown. What could it be? That's right, it's Docker (pid=1186)!

// worker node
$ ss -a -p |grep 157387
...
u_str  ESTAB      0      0       * 157937                * 157387                users:(("kubelet",pid=5714,fd=33))
u_str  ESTAB      0      0      /var/run/docker.sock 157387                * 157937                users:(("dockerd",pid=1186,fd=14))
...

As you remember, this is the Docker daemon process (pid=1186) that executes our command:

// worker node
$ ps -afx
...
 1186 ?        Ssl    0:55 /usr/bin/dockerd -H fd://
17784 ?        Sl     0:00      _ docker-containerd-shim 53a0a08547b2f95986402d7f3b3e78702516244df049ba6c5aa012e81264aa3c /var/run/docker/libcontainerd/53a0a08547b2f95986402d7f3
17801 pts/2    Ss     0:00          _ sh
17827 pts/2    S+     0:00              _ sleep 5000
...

4. Activity in the container runtime

Let's dig into the CRI-O source code to understand what's going on. In Docker the logic is similar.

There is a server responsible for implementing RuntimeServiceServer:

// Server implements the RuntimeService and ImageService
type Server struct {
        config          libconfig.Config
        seccompProfile  *seccomp.Seccomp
        stream          StreamService
        netPlugin       ocicni.CNIPlugin
        hostportManager hostport.HostPortManager

        appArmorProfile string
        hostIP          string
        bindAddress     string

        *lib.ContainerServer
        monitorsChan      chan struct{}
        defaultIDMappings *idtools.IDMappings
        systemContext     *types.SystemContext // Never nil

        updateLock sync.RWMutex

        seccompEnabled  bool
        appArmorEnabled bool
}

(cri-o/server/server.go)

// Exec prepares a streaming endpoint to execute a command in the container.
func (s *Server) Exec(ctx context.Context, req *pb.ExecRequest) (resp *pb.ExecResponse, err error) {
        const operation = "exec"
        defer func() {
                recordOperation(operation, time.Now())
                recordError(operation, err)
        }()

        resp, err = s.getExec(req)
        if err != nil {
                return nil, fmt.Errorf("unable to prepare exec endpoint: %v", err)
        }

        return resp, nil
}

(cri-o/server/container_exec.go)

At the end of the chain, the container runtime executes the command on the worker node (r.path here is the OCI runtime binary, i.e. runc):

// ExecContainer prepares a streaming endpoint to execute a command in the container.
func (r *runtimeOCI) ExecContainer(c *Container, cmd []string, stdin io.Reader, stdout, stderr io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error {
        processFile, err := prepareProcessExec(c, cmd, tty)
        if err != nil {
                return err
        }
        defer os.RemoveAll(processFile.Name())

        args := []string{rootFlag, r.root, "exec"}
        args = append(args, "--process", processFile.Name(), c.ID())
        execCmd := exec.Command(r.path, args...)
        if v, found := os.LookupEnv("XDG_RUNTIME_DIR"); found {
                execCmd.Env = append(execCmd.Env, fmt.Sprintf("XDG_RUNTIME_DIR=%s", v))
        }
        var cmdErr, copyError error
        if tty {
                cmdErr = ttyCmd(execCmd, stdin, stdout, resize)
        } else {
                if stdin != nil {
                        // Use an os.Pipe here as it returns true *os.File objects.
                        // This way, if you run 'kubectl exec <pod> -i bash' (no tty) and type 'exit',
                        // the call below to execCmd.Run() can unblock because its Stdin is the read half
                        // of the pipe.
                        r, w, err := os.Pipe()
                        if err != nil {
                                return err
                        }
                        go func() { _, copyError = pools.Copy(w, stdin) }()

                        execCmd.Stdin = r
                }
                if stdout != nil {
                        execCmd.Stdout = stdout
                }
                if stderr != nil {
                        execCmd.Stderr = stderr
                }

                cmdErr = execCmd.Run()
        }

        if copyError != nil {
                return copyError
        }
        if exitErr, ok := cmdErr.(*exec.ExitError); ok {
                return &utilexec.ExitErrorWrapper{ExitError: exitErr}
        }
        return cmdErr
}

(cri-o/internal/oci/runtime_oci.go)
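
Put together, the exec.Command call above assembles an invocation roughly like the following (all paths and the container ID are illustrative):

package main

import (
        "fmt"
        "os/exec"
)

func main() {
        // r.path is the OCI runtime binary (runc), r.root its state directory,
        // and the --process file is the OCI process spec written by
        // prepareProcessExec.
        cmd := exec.Command(
                "/usr/bin/runc",
                "--root", "/run/runc",
                "exec",
                "--process", "/tmp/exec-process.json",
                "<container-id>",
        )
        fmt.Println(cmd.Args) // the command line the runtime actually forks
}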

Finally, the kernel executes the command:

Takeaways

  • The API server can also initiate the connection to the kubelet.
  • The following connections stay open until the interactive exec session ends:
    • between kubectl and the api-server;
    • between the api-server and the kubelet;
    • between the kubelet and the container runtime.
  • Neither kubectl nor the api-server can run anything on the worker nodes. The kubelet could run things, but it too delegates these actions to the container runtime.
