How does kubectl exec work?

Translator's note: the author of this article, Erkan Erol, an engineer at SAP, describes how the kubectl exec machinery works, something every Kubernetes practitioner should know. The whole walkthrough is illustrated with excerpts from the Kubernetes source code (and related projects), which lets you dig into the subject as deeply as you need.


One Friday a colleague came to me and asked how to execute a command in a pod using client-go. I couldn't answer him and suddenly realized that I knew nothing about the machinery of kubectl exec. Yes, I had certain ideas about how it is structured, but I wasn't 100% sure they were correct, so I decided to get to the bottom of it. Having studied blogs, documentation, and source code, I learned a lot of new things, and in this article I want to share my discoveries and my understanding. If anything is wrong, please contact me on Twitter.

Setup

To create a cluster on a MacBook, I cloned ecomm-integration-ballerina/kubernetes-cluster. Then I fixed the IP addresses of the nodes in the kubelet config, since the default configuration did not allow kubectl exec. You can read more about the root cause of this here.

  • Any machine = my MacBook
  • Master node IP = 192.168.205.10
  • Worker node IP = 192.168.205.11
  • API server port = 6443

Components


  • kubectl exec process: when we run "kubectl exec ..." a process starts. It can be run on any machine with access to the K8s API server. Translator's note: in the console listings below the author uses the comment "any machine", meaning that the commands that follow can be run on any machine with access to Kubernetes.
  • api server: a component on the master node that provides access to the Kubernetes API. It is the frontend of the Kubernetes control plane.
  • kubelet: an agent that runs on every node of the cluster. It ensures that containers are running in a pod.
  • container runtime: the software responsible for running containers. Examples: Docker, CRI-O, containerd…
  • kernel: the OS kernel on the worker node; it is responsible for managing processes.
  • target container: a container that is part of a pod and runs on one of the worker nodes.

What I found out

1. Activity on the client side

Create a pod in the default namespace:

// any machine
$ kubectl run exec-test-nginx --image=nginx

Then we run the exec command and sleep for 5000 seconds for further observation:

// any machine
$ kubectl exec -it exec-test-nginx-6558988d5-fgxgg -- sh
# sleep 5000

A kubectl process appears (with pid=8507 in our case):

// any machine
$ ps -ef |grep kubectl
501  8507  8409   0  7:19PM ttys000    0:00.13 kubectl exec -it exec-test-nginx-6558988d5-fgxgg -- sh

If we check the network activity of the process, we will see that it has connections to the api-server (192.168.205.10:6443):

// any machine
$ netstat -atnv |grep 8507
tcp4       0      0  192.168.205.1.51673    192.168.205.10.6443    ESTABLISHED 131072 131768   8507      0 0x0102 0x00000020
tcp4       0      0  192.168.205.1.51672    192.168.205.10.6443    ESTABLISHED 131072 131768   8507      0 0x0102 0x00000028

Let's look at the code. kubectl creates a POST request with the exec subresource and sends the REST request:

req := restClient.Post().
        Resource("pods").
        Name(pod.Name).
        Namespace(pod.Namespace).
        SubResource("exec")
req.VersionedParams(&corev1.PodExecOptions{
        Container: containerName,
        Command:   p.Command,
        Stdin:     p.Stdin,
        Stdout:    p.Out != nil,
        Stderr:    p.ErrOut != nil,
        TTY:       t.Raw,
}, scheme.ParameterCodec)

return p.Executor.Execute("POST", req.URL(), p.Config, p.In, p.Out, p.ErrOut, t.Raw, sizeQueue)

(kubectl/pkg/cmd/exec/exec.go)
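Stripped of client-go, the URL this builder produces can be sketched with the standard library alone. This is an illustrative approximation, not client-go's actual code; buildExecURL is a hypothetical helper name:

```go
package main

import (
	"fmt"
	"net/url"
)

// buildExecURL approximates the URL kubectl's request builder produces for
// the exec subresource. Simplified sketch; the real construction lives in
// client-go's rest package.
func buildExecURL(apiServer, namespace, pod, container string, command []string, stdin, tty bool) string {
	q := url.Values{}
	q.Set("container", container)
	for _, c := range command {
		q.Add("command", c)
	}
	q.Set("stdin", fmt.Sprintf("%t", stdin))
	q.Set("stdout", "true")
	q.Set("stderr", "true")
	q.Set("tty", fmt.Sprintf("%t", tty))
	u := url.URL{
		Scheme:   "https",
		Host:     apiServer,
		Path:     fmt.Sprintf("/api/v1/namespaces/%s/pods/%s/exec", namespace, pod),
		RawQuery: q.Encode(),
	}
	return u.String()
}

func main() {
	fmt.Println(buildExecURL("192.168.205.10:6443", "default",
		"exec-test-nginx-6558988d5-fgxgg", "exec-test-nginx", []string{"sh"}, true, true))
}
```

Note the path shape `/api/v1/namespaces/<ns>/pods/<pod>/exec` — it matches the POST seen in the api-server logs below.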


2. Activity on the master node

We can also observe the request on the api-server side:

handler.go:143] kube-apiserver: POST "/api/v1/namespaces/default/pods/exec-test-nginx-6558988d5-fgxgg/exec" satisfied by gorestful with webservice /api/v1
upgradeaware.go:261] Connecting to backend proxy (intercepting redirects) https://192.168.205.11:10250/exec/default/exec-test-nginx-6558988d5-fgxgg/exec-test-nginx?command=sh&input=1&output=1&tty=1
Headers: map[Connection:[Upgrade] Content-Length:[0] Upgrade:[SPDY/3.1] User-Agent:[kubectl/v1.12.10 (darwin/amd64) kubernetes/e3c1340] X-Forwarded-For:[192.168.205.1] X-Stream-Protocol-Version:[v4.channel.k8s.io v3.channel.k8s.io v2.channel.k8s.io channel.k8s.io]]

Note that the HTTP request includes a request to upgrade the protocol. SPDY allows multiplexing individual stdin/stdout/stderr/spdy-error "streams" over a single TCP connection.
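The negotiation carried by the X-Stream-Protocol-Version header boils down to picking the first client-preferred protocol the server also supports. A toy model of that handshake (chooseProtocol is my name for the sketch, not the real apimachinery function):

```go
package main

import "fmt"

// chooseProtocol picks the first client-preferred stream protocol the server
// also supports -- a simplified model of the handshake performed over the
// X-Stream-Protocol-Version header during the SPDY upgrade.
func chooseProtocol(clientPrefs, serverSupported []string) string {
	supported := make(map[string]bool, len(serverSupported))
	for _, p := range serverSupported {
		supported[p] = true
	}
	for _, p := range clientPrefs {
		if supported[p] {
			return p
		}
	}
	return "" // no common protocol: the server would reject the upgrade
}

func main() {
	client := []string{"v4.channel.k8s.io", "v3.channel.k8s.io", "v2.channel.k8s.io", "channel.k8s.io"}
	server := []string{"v2.channel.k8s.io", "v3.channel.k8s.io"}
	fmt.Println(chooseProtocol(client, server)) // v3.channel.k8s.io
}
```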

The API server receives the request and converts it into PodExecOptions:

// PodExecOptions is the query options to a Pod's remote exec call
type PodExecOptions struct {
        metav1.TypeMeta

        // Stdin if true indicates that stdin is to be redirected for the exec call
        Stdin bool

        // Stdout if true indicates that stdout is to be redirected for the exec call
        Stdout bool

        // Stderr if true indicates that stderr is to be redirected for the exec call
        Stderr bool

        // TTY if true indicates that a tty will be allocated for the exec call
        TTY bool

        // Container in which to execute the command.
        Container string

        // Command is the remote command to execute; argv array; not executed within a shell.
        Command []string
}

(pkg/apis/core/types.go)
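The query string seen in the kubelet URL above (command=sh&input=1&output=1&tty=1) maps onto these fields. A hand-rolled sketch of the decoding (the api-server actually uses its versioned codec machinery; parseExecOptions and execOptions are illustrative names):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// execOptions mirrors the PodExecOptions fields that arrive in the query
// string of the kubelet exec URL.
type execOptions struct {
	Stdin, Stdout, Stderr, TTY bool
	Container                  string
	Command                    []string
}

// parseExecOptions decodes a query string such as
// "command=sh&input=1&output=1&tty=1" (the kubelet-side parameter names).
func parseExecOptions(rawQuery string) (execOptions, error) {
	v, err := url.ParseQuery(rawQuery)
	if err != nil {
		return execOptions{}, err
	}
	b := func(key string) bool {
		ok, _ := strconv.ParseBool(v.Get(key)) // absent keys decode as false
		return ok
	}
	return execOptions{
		Stdin:     b("input"),
		Stdout:    b("output"),
		Stderr:    b("error"),
		TTY:       b("tty"),
		Container: v.Get("container"),
		Command:   v["command"],
	}, nil
}

func main() {
	opts, err := parseExecOptions("command=sh&input=1&output=1&tty=1")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", opts)
}
```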

In order to do its job, the api-server needs to know the location of the pod it has to contact:

// ExecLocation returns the exec URL for a pod container. If opts.Container is blank
// and only one container is present in the pod, that container is used.
func ExecLocation(
        getter ResourceGetter,
        connInfo client.ConnectionInfoGetter,
        ctx context.Context,
        name string,
        opts *api.PodExecOptions,
) (*url.URL, http.RoundTripper, error) {
        return streamLocation(getter, connInfo, ctx, name, opts, opts.Container, "exec")
}

(pkg/registry/core/pod/strategy.go)

Of course, the endpoint data is derived from the information about the node:

        nodeName := types.NodeName(pod.Spec.NodeName)
        if len(nodeName) == 0 {
                // If pod has not been assigned a host, return an empty location
                return nil, nil, errors.NewBadRequest(fmt.Sprintf("pod %s does not have a host assigned", name))
        }
        nodeInfo, err := connInfo.GetConnectionInfo(ctx, nodeName)

(pkg/registry/core/pod/strategy.go)

Hooray! The kubelet now has a port (node.Status.DaemonEndpoints.KubeletEndpoint.Port) to which the API server can connect:

// GetConnectionInfo retrieves connection info from the status of a Node API object.
func (k *NodeConnectionInfoGetter) GetConnectionInfo(ctx context.Context, nodeName types.NodeName) (*ConnectionInfo, error) {
        node, err := k.nodes.Get(ctx, string(nodeName), metav1.GetOptions{})
        if err != nil {
                return nil, err
        }

        // Find a kubelet-reported address, using preferred address type
        host, err := nodeutil.GetPreferredNodeAddress(node, k.preferredAddressTypes)
        if err != nil {
                return nil, err
        }

        // Use the kubelet-reported port, if present
        port := int(node.Status.DaemonEndpoints.KubeletEndpoint.Port)
        if port <= 0 {
                port = k.defaultPort
        }

        return &ConnectionInfo{
                Scheme:    k.scheme,
                Hostname:  host,
                Port:      strconv.Itoa(port),
                Transport: k.transport,
        }, nil
}

(pkg/kubelet/client/kubelet_client.go)
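The port-selection rule at the heart of GetConnectionInfo is small enough to restate on its own (kubeletPort is a hypothetical extraction of those few lines):

```go
package main

import (
	"fmt"
	"strconv"
)

// kubeletPort reproduces the fallback rule from GetConnectionInfo: prefer
// the port the kubelet reported in node.Status.DaemonEndpoints, otherwise
// fall back to the configured default (normally 10250).
func kubeletPort(reported, defaultPort int) string {
	port := reported
	if port <= 0 {
		port = defaultPort
	}
	return strconv.Itoa(port)
}

func main() {
	fmt.Println(kubeletPort(10250, 10250)) // kubelet-reported port wins
	fmt.Println(kubeletPort(0, 10250))     // unset: falls back to the default
}
```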

From the Master-Node communication > Master to Cluster > apiserver to kubelet documentation:

These connections terminate at the kubelet's HTTPS endpoint. By default, the apiserver does not verify the kubelet's serving certificate, which makes the connection subject to man-in-the-middle (MITM) attacks and unsafe to run over untrusted and/or public networks.

Now the API server knows the endpoint and establishes the connection:

// Connect returns a handler for the pod exec proxy
func (r *ExecREST) Connect(ctx context.Context, name string, opts runtime.Object, responder rest.Responder) (http.Handler, error) {
        execOpts, ok := opts.(*api.PodExecOptions)
        if !ok {
                return nil, fmt.Errorf("invalid options object: %#v", opts)
        }
        location, transport, err := pod.ExecLocation(r.Store, r.KubeletConn, ctx, name, execOpts)
        if err != nil {
                return nil, err
        }
        return newThrottledUpgradeAwareProxyHandler(location, transport, false, true, true, responder), nil
}

(pkg/registry/core/pod/rest/subresources.go)
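The handler returned here is essentially an upgrade-aware reverse proxy from the api-server to the kubelet. A much-reduced sketch with two httptest servers standing in for the kubelet and the api-server (the real newThrottledUpgradeAwareProxyHandler also handles the SPDY upgrade and redirect interception; proxyDemo is an illustrative name):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// proxyDemo spins up a fake "kubelet" and a fake "api-server" that merely
// reverse-proxies to it, then issues one request through the chain.
func proxyDemo() (string, error) {
	// The "kubelet": the backend that would serve /exec/<ns>/<pod>/<container>.
	kubelet := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "kubelet got %s", r.URL.Path)
	}))
	defer kubelet.Close()

	target, err := url.Parse(kubelet.URL)
	if err != nil {
		return "", err
	}
	// The "api-server": proxies every incoming request to the kubelet.
	apiServer := httptest.NewServer(httputil.NewSingleHostReverseProxy(target))
	defer apiServer.Close()

	resp, err := http.Get(apiServer.URL + "/exec/default/mypod/mycontainer")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	out, err := proxyDemo()
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // kubelet got /exec/default/mypod/mycontainer
}
```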

Let's see what happens on the master node.

First, we find out the IP of the worker node. In our case it is 192.168.205.11:

// any machine
$ kubectl get nodes k8s-node-1 -o wide
NAME         STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
k8s-node-1   Ready    <none>   9h    v1.15.3   192.168.205.11   <none>        Ubuntu 16.04.6 LTS   4.4.0-159-generic   docker://17.3.3

Then we determine the kubelet port (10250 in our case):

// any machine
$ kubectl get nodes k8s-node-1 -o jsonpath='{.status.daemonEndpoints.kubeletEndpoint}'
map[Port:10250]

Now it's time to check the network. Is there a connection to the worker node (192.168.205.11)? There is! If the exec process is killed, the connection disappears, so I know it was established by the api-server as a result of the exec command.

// master node
$ netstat -atn |grep 192.168.205.11
tcp        0      0 192.168.205.10:37870    192.168.205.11:10250    ESTABLISHED
…


The connection between kubectl and the api-server is still open. In addition, there is another connection linking the api-server and the kubelet.

3. Activity on the worker node

Now let's connect to the worker node and see what happens on it.

First of all, we see that the connection to it is established as well (second line); 192.168.205.10 is the master node's IP:

// worker node
$ netstat -atn |grep 10250
tcp6       0      0 :::10250                :::*                    LISTEN
tcp6       0      0 192.168.205.11:10250    192.168.205.10:37870    ESTABLISHED

What about our sleep process? Hurray, it's there too!

// worker node
$ ps -afx
...
31463 ?        Sl     0:00      _ docker-containerd-shim 7d974065bbb3107074ce31c51f5ef40aea8dcd535ae11a7b8f2dd180b8ed583a /var/run/docker/libcontainerd/7d974065bbb3107074ce31c51
31478 pts/0    Ss     0:00          _ sh
31485 pts/0    S+     0:00              _ sleep 5000
…

But wait: how did the kubelet pull this off? The kubelet has a daemon that exposes an API over its port for api-server requests:

// Server is the library interface to serve the stream requests.
type Server interface {
        http.Handler

        // Get the serving URL for the requests.
        // Requests must not be nil. Responses may be nil iff an error is returned.
        GetExec(*runtimeapi.ExecRequest) (*runtimeapi.ExecResponse, error)
        GetAttach(req *runtimeapi.AttachRequest) (*runtimeapi.AttachResponse, error)
        GetPortForward(*runtimeapi.PortForwardRequest) (*runtimeapi.PortForwardResponse, error)

        // Start the server.
        // addr is the address to serve on (address:port) stayUp indicates whether the server should
        // listen until Stop() is called, or automatically stop after all expected connections are
        // closed. Calling Get{Exec,Attach,PortForward} increments the expected connection count.
        // Function does not return until the server is stopped.
        Start(stayUp bool) error
        // Stop the server, and terminate any open connections.
        Stop() error
}

(pkg/kubelet/server/streaming/server.go)

The kubelet computes the response endpoint for the exec request:

func (s *server) GetExec(req *runtimeapi.ExecRequest) (*runtimeapi.ExecResponse, error) {
        if err := validateExecRequest(req); err != nil {
                return nil, err
        }
        token, err := s.cache.Insert(req)
        if err != nil {
                return nil, err
        }
        return &runtimeapi.ExecResponse{
                Url: s.buildURL("exec", token),
        }, nil
}

(pkg/kubelet/server/streaming/server.go)
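The cache.Insert / buildURL pair above can be modeled as a single-use token store: the request is parked under a random token, and a URL embedding that token is handed back for the follow-up streaming connection. A toy version (requestCache and its methods are illustrative, not the real implementation):

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"net/url"
	"sync"
)

// requestCache is a toy version of the kubelet's streaming request cache:
// Insert stores the request under a random token and returns the token;
// buildURL embeds it in the path the client will connect to next.
type requestCache struct {
	mu    sync.Mutex
	store map[string]string // token -> serialized request (simplified)
}

func newRequestCache() *requestCache {
	return &requestCache{store: map[string]string{}}
}

func (c *requestCache) Insert(req string) (string, error) {
	buf := make([]byte, 8)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	token := base64.RawURLEncoding.EncodeToString(buf)
	c.mu.Lock()
	defer c.mu.Unlock()
	c.store[token] = req
	return token, nil
}

func (c *requestCache) buildURL(method, token string) string {
	return (&url.URL{Path: "/" + method + "/" + token}).String()
}

func main() {
	cache := newRequestCache()
	token, err := cache.Insert("exec sh in container exec-test-nginx")
	if err != nil {
		panic(err)
	}
	fmt.Println(cache.buildURL("exec", token)) // prints a path like /exec/<token>
}
```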

Don't be confused: it returns not the result of the command, but an endpoint for further communication:

type ExecResponse struct {
        // Fully qualified URL of the exec streaming server.
        Url                  string   `protobuf:"bytes,1,opt,name=url,proto3" json:"url,omitempty"`
        XXX_NoUnkeyedLiteral struct{} `json:"-"`
        XXX_sizecache        int32    `json:"-"`
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)

The kubelet uses the RuntimeServiceClient interface, which is part of the Container Runtime Interface (we have written more about it, for example, here — translator's note):

A long listing from cri-api in kubernetes/kubernetes

// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://godoc.org/google.golang.org/grpc#ClientConn.NewStream.
type RuntimeServiceClient interface {
        // Version returns the runtime name, runtime version, and runtime API version.
        Version(ctx context.Context, in *VersionRequest, opts ...grpc.CallOption) (*VersionResponse, error)
        // RunPodSandbox creates and starts a pod-level sandbox. Runtimes must ensure
        // the sandbox is in the ready state on success.
        RunPodSandbox(ctx context.Context, in *RunPodSandboxRequest, opts ...grpc.CallOption) (*RunPodSandboxResponse, error)
        // StopPodSandbox stops any running process that is part of the sandbox and
        // reclaims network resources (e.g., IP addresses) allocated to the sandbox.
        // If there are any running containers in the sandbox, they must be forcibly
        // terminated.
        // This call is idempotent, and must not return an error if all relevant
        // resources have already been reclaimed. kubelet will call StopPodSandbox
        // at least once before calling RemovePodSandbox. It will also attempt to
        // reclaim resources eagerly, as soon as a sandbox is not needed. Hence,
        // multiple StopPodSandbox calls are expected.
        StopPodSandbox(ctx context.Context, in *StopPodSandboxRequest, opts ...grpc.CallOption) (*StopPodSandboxResponse, error)
        // RemovePodSandbox removes the sandbox. If there are any running containers
        // in the sandbox, they must be forcibly terminated and removed.
        // This call is idempotent, and must not return an error if the sandbox has
        // already been removed.
        RemovePodSandbox(ctx context.Context, in *RemovePodSandboxRequest, opts ...grpc.CallOption) (*RemovePodSandboxResponse, error)
        // PodSandboxStatus returns the status of the PodSandbox. If the PodSandbox is not
        // present, returns an error.
        PodSandboxStatus(ctx context.Context, in *PodSandboxStatusRequest, opts ...grpc.CallOption) (*PodSandboxStatusResponse, error)
        // ListPodSandbox returns a list of PodSandboxes.
        ListPodSandbox(ctx context.Context, in *ListPodSandboxRequest, opts ...grpc.CallOption) (*ListPodSandboxResponse, error)
        // CreateContainer creates a new container in specified PodSandbox
        CreateContainer(ctx context.Context, in *CreateContainerRequest, opts ...grpc.CallOption) (*CreateContainerResponse, error)
        // StartContainer starts the container.
        StartContainer(ctx context.Context, in *StartContainerRequest, opts ...grpc.CallOption) (*StartContainerResponse, error)
        // StopContainer stops a running container with a grace period (i.e., timeout).
        // This call is idempotent, and must not return an error if the container has
        // already been stopped.
        // TODO: what must the runtime do after the grace period is reached?
        StopContainer(ctx context.Context, in *StopContainerRequest, opts ...grpc.CallOption) (*StopContainerResponse, error)
        // RemoveContainer removes the container. If the container is running, the
        // container must be forcibly removed.
        // This call is idempotent, and must not return an error if the container has
        // already been removed.
        RemoveContainer(ctx context.Context, in *RemoveContainerRequest, opts ...grpc.CallOption) (*RemoveContainerResponse, error)
        // ListContainers lists all containers by filters.
        ListContainers(ctx context.Context, in *ListContainersRequest, opts ...grpc.CallOption) (*ListContainersResponse, error)
        // ContainerStatus returns status of the container. If the container is not
        // present, returns an error.
        ContainerStatus(ctx context.Context, in *ContainerStatusRequest, opts ...grpc.CallOption) (*ContainerStatusResponse, error)
        // UpdateContainerResources updates ContainerConfig of the container.
        UpdateContainerResources(ctx context.Context, in *UpdateContainerResourcesRequest, opts ...grpc.CallOption) (*UpdateContainerResourcesResponse, error)
        // ReopenContainerLog asks runtime to reopen the stdout/stderr log file
        // for the container. This is often called after the log file has been
        // rotated. If the container is not running, container runtime can choose
        // to either create a new log file and return nil, or return an error.
        // Once it returns error, new container log file MUST NOT be created.
        ReopenContainerLog(ctx context.Context, in *ReopenContainerLogRequest, opts ...grpc.CallOption) (*ReopenContainerLogResponse, error)
        // ExecSync runs a command in a container synchronously.
        ExecSync(ctx context.Context, in *ExecSyncRequest, opts ...grpc.CallOption) (*ExecSyncResponse, error)
        // Exec prepares a streaming endpoint to execute a command in the container.
        Exec(ctx context.Context, in *ExecRequest, opts ...grpc.CallOption) (*ExecResponse, error)
        // Attach prepares a streaming endpoint to attach to a running container.
        Attach(ctx context.Context, in *AttachRequest, opts ...grpc.CallOption) (*AttachResponse, error)
        // PortForward prepares a streaming endpoint to forward ports from a PodSandbox.
        PortForward(ctx context.Context, in *PortForwardRequest, opts ...grpc.CallOption) (*PortForwardResponse, error)
        // ContainerStats returns stats of the container. If the container does not
        // exist, the call returns an error.
        ContainerStats(ctx context.Context, in *ContainerStatsRequest, opts ...grpc.CallOption) (*ContainerStatsResponse, error)
        // ListContainerStats returns stats of all running containers.
        ListContainerStats(ctx context.Context, in *ListContainerStatsRequest, opts ...grpc.CallOption) (*ListContainerStatsResponse, error)
        // UpdateRuntimeConfig updates the runtime configuration based on the given request.
        UpdateRuntimeConfig(ctx context.Context, in *UpdateRuntimeConfigRequest, opts ...grpc.CallOption) (*UpdateRuntimeConfigResponse, error)
        // Status returns the status of the runtime.
        Status(ctx context.Context, in *StatusRequest, opts ...grpc.CallOption) (*StatusResponse, error)
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)

It simply uses gRPC to invoke a method through the Container Runtime Interface:

type runtimeServiceClient struct {
        cc *grpc.ClientConn
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)

func (c *runtimeServiceClient) Exec(ctx context.Context, in *ExecRequest, opts ...grpc.CallOption) (*ExecResponse, error) {
        out := new(ExecResponse)
        err := c.cc.Invoke(ctx, "/runtime.v1alpha2.RuntimeService/Exec", in, out, opts...)
        if err != nil {
                return nil, err
        }
        return out, nil
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)
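Abstracting away gRPC, the exec flow between the kubelet and the runtime is two-step: first call Exec to obtain a streaming URL, then proxy the client's upgraded connection to that URL. A sketch with a fake runtime standing in for the real gRPC client (every type here is an illustrative stand-in, not the real CRI type):

```go
package main

import "fmt"

// execRequest / execResponse are simplified stand-ins for the CRI messages.
type execRequest struct {
	ContainerID string
	Cmd         []string
}

type execResponse struct{ URL string }

// runtimeService models the one CRI call we care about here.
type runtimeService interface {
	Exec(req execRequest) (execResponse, error)
}

// fakeRuntime plays the role of the container runtime's gRPC server: a real
// runtime validates the request, caches it under a token, and returns the
// URL of its streaming server (see GetExec above).
type fakeRuntime struct{ streamAddr string }

func (f fakeRuntime) Exec(req execRequest) (execResponse, error) {
	return execResponse{URL: f.streamAddr + "/exec/" + req.ContainerID}, nil
}

// startExec is step one of the kubelet's job: obtain the streaming URL.
// Step two (not shown) is proxying the upgraded connection to that URL.
func startExec(rt runtimeService, containerID string, cmd []string) (string, error) {
	resp, err := rt.Exec(execRequest{ContainerID: containerID, Cmd: cmd})
	if err != nil {
		return "", err
	}
	return resp.URL, nil
}

func main() {
	rt := fakeRuntime{streamAddr: "http://127.0.0.1:44321"}
	u, err := startExec(rt, "7d974065bbb3", []string{"sh"})
	if err != nil {
		panic(err)
	}
	fmt.Println(u) // http://127.0.0.1:44321/exec/7d974065bbb3
}
```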

The container runtime is responsible for implementing RuntimeServiceServer:

A long listing from cri-api in kubernetes/kubernetes

// RuntimeServiceServer is the server API for RuntimeService service.
type RuntimeServiceServer interface {
        // Version returns the runtime name, runtime version, and runtime API version.
        Version(context.Context, *VersionRequest) (*VersionResponse, error)
        // RunPodSandbox creates and starts a pod-level sandbox. Runtimes must ensure
        // the sandbox is in the ready state on success.
        RunPodSandbox(context.Context, *RunPodSandboxRequest) (*RunPodSandboxResponse, error)
        // StopPodSandbox stops any running process that is part of the sandbox and
        // reclaims network resources (e.g., IP addresses) allocated to the sandbox.
        // If there are any running containers in the sandbox, they must be forcibly
        // terminated.
        // This call is idempotent, and must not return an error if all relevant
        // resources have already been reclaimed. kubelet will call StopPodSandbox
        // at least once before calling RemovePodSandbox. It will also attempt to
        // reclaim resources eagerly, as soon as a sandbox is not needed. Hence,
        // multiple StopPodSandbox calls are expected.
        StopPodSandbox(context.Context, *StopPodSandboxRequest) (*StopPodSandboxResponse, error)
        // RemovePodSandbox removes the sandbox. If there are any running containers
        // in the sandbox, they must be forcibly terminated and removed.
        // This call is idempotent, and must not return an error if the sandbox has
        // already been removed.
        RemovePodSandbox(context.Context, *RemovePodSandboxRequest) (*RemovePodSandboxResponse, error)
        // PodSandboxStatus returns the status of the PodSandbox. If the PodSandbox is not
        // present, returns an error.
        PodSandboxStatus(context.Context, *PodSandboxStatusRequest) (*PodSandboxStatusResponse, error)
        // ListPodSandbox returns a list of PodSandboxes.
        ListPodSandbox(context.Context, *ListPodSandboxRequest) (*ListPodSandboxResponse, error)
        // CreateContainer creates a new container in specified PodSandbox
        CreateContainer(context.Context, *CreateContainerRequest) (*CreateContainerResponse, error)
        // StartContainer starts the container.
        StartContainer(context.Context, *StartContainerRequest) (*StartContainerResponse, error)
        // StopContainer stops a running container with a grace period (i.e., timeout).
        // This call is idempotent, and must not return an error if the container has
        // already been stopped.
        // TODO: what must the runtime do after the grace period is reached?
        StopContainer(context.Context, *StopContainerRequest) (*StopContainerResponse, error)
        // RemoveContainer removes the container. If the container is running, the
        // container must be forcibly removed.
        // This call is idempotent, and must not return an error if the container has
        // already been removed.
        RemoveContainer(context.Context, *RemoveContainerRequest) (*RemoveContainerResponse, error)
        // ListContainers lists all containers by filters.
        ListContainers(context.Context, *ListContainersRequest) (*ListContainersResponse, error)
        // ContainerStatus returns status of the container. If the container is not
        // present, returns an error.
        ContainerStatus(context.Context, *ContainerStatusRequest) (*ContainerStatusResponse, error)
        // UpdateContainerResources updates ContainerConfig of the container.
        UpdateContainerResources(context.Context, *UpdateContainerResourcesRequest) (*UpdateContainerResourcesResponse, error)
        // ReopenContainerLog asks runtime to reopen the stdout/stderr log file
        // for the container. This is often called after the log file has been
        // rotated. If the container is not running, container runtime can choose
        // to either create a new log file and return nil, or return an error.
        // Once it returns error, new container log file MUST NOT be created.
        ReopenContainerLog(context.Context, *ReopenContainerLogRequest) (*ReopenContainerLogResponse, error)
        // ExecSync runs a command in a container synchronously.
        ExecSync(context.Context, *ExecSyncRequest) (*ExecSyncResponse, error)
        // Exec prepares a streaming endpoint to execute a command in the container.
        Exec(context.Context, *ExecRequest) (*ExecResponse, error)
        // Attach prepares a streaming endpoint to attach to a running container.
        Attach(context.Context, *AttachRequest) (*AttachResponse, error)
        // PortForward prepares a streaming endpoint to forward ports from a PodSandbox.
        PortForward(context.Context, *PortForwardRequest) (*PortForwardResponse, error)
        // ContainerStats returns stats of the container. If the container does not
        // exist, the call returns an error.
        ContainerStats(context.Context, *ContainerStatsRequest) (*ContainerStatsResponse, error)
        // ListContainerStats returns stats of all running containers.
        ListContainerStats(context.Context, *ListContainerStatsRequest) (*ListContainerStatsResponse, error)
        // UpdateRuntimeConfig updates the runtime configuration based on the given request.
        UpdateRuntimeConfig(context.Context, *UpdateRuntimeConfigRequest) (*UpdateRuntimeConfigResponse, error)
        // Status returns the status of the runtime.
        Status(context.Context, *StatusRequest) (*StatusResponse, error)
}

(cri-api/pkg/apis/runtime/v1alpha2/api.pb.go)

If so, we should see a connection between the kubelet and the container runtime, right? Let's check.

Run this command before and after the exec command and look at the differences. In my case the difference is this:

// worker node
$ ss -a -p |grep kubelet
...
u_str  ESTAB      0      0       * 157937                * 157387                users:(("kubelet",pid=5714,fd=33))
...

Hmmm... A new connection over unix sockets between the kubelet (pid=5714) and something unknown. What could it be? Right, it's Docker (pid=1186)!

// worker node
$ ss -a -p |grep 157387
...
u_str  ESTAB      0      0       * 157937                * 157387                users:(("kubelet",pid=5714,fd=33))
u_str  ESTAB      0      0      /var/run/docker.sock 157387                * 157937                users:(("dockerd",pid=1186,fd=14))
...

As you remember, it is the docker daemon process (pid=1186) that executes our command:

// worker node
$ ps -afx
...
 1186 ?        Ssl    0:55 /usr/bin/dockerd -H fd://
17784 ?        Sl     0:00      _ docker-containerd-shim 53a0a08547b2f95986402d7f3b3e78702516244df049ba6c5aa012e81264aa3c /var/run/docker/libcontainerd/53a0a08547b2f95986402d7f3
17801 pts/2    Ss     0:00          _ sh
17827 pts/2    S+     0:00              _ sleep 5000
...
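That kubelet↔dockerd link is an ordinary unix domain socket. A self-contained miniature of the same mechanism, using a throwaway socket in a temp directory instead of /var/run/docker.sock (socketDemo is an illustrative name):

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"os"
	"path/filepath"
)

// socketDemo builds a miniature kubelet<->daemon link: an "echo daemon"
// listening on a unix socket, and a client that connects and sends one line.
func socketDemo() (string, error) {
	dir, err := os.MkdirTemp("", "sockdemo")
	if err != nil {
		return "", err
	}
	defer os.RemoveAll(dir)
	sock := filepath.Join(dir, "demo.sock")

	ln, err := net.Listen("unix", sock)
	if err != nil {
		return "", err
	}
	defer ln.Close()

	go func() { // the "daemon": read one line, echo it back with a prefix
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		defer conn.Close()
		line, _ := bufio.NewReader(conn).ReadString('\n')
		fmt.Fprintf(conn, "daemon got: %s", line)
	}()

	// The "kubelet": dial the socket and send a request.
	conn, err := net.Dial("unix", sock)
	if err != nil {
		return "", err
	}
	defer conn.Close()
	fmt.Fprintln(conn, "exec sh")
	reply, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		return "", err
	}
	return reply, nil
}

func main() {
	reply, err := socketDemo()
	if err != nil {
		panic(err)
	}
	fmt.Print(reply) // daemon got: exec sh
}
```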

4. Activity in the container runtime

Let's examine the CRI-O source code to understand what is going on. In Docker the logic is similar.

It has a server responsible for implementing RuntimeServiceServer:

// Server implements the RuntimeService and ImageService
type Server struct {
        config          libconfig.Config
        seccompProfile  *seccomp.Seccomp
        stream          StreamService
        netPlugin       ocicni.CNIPlugin
        hostportManager hostport.HostPortManager

        appArmorProfile string
        hostIP          string
        bindAddress     string

        *lib.ContainerServer
        monitorsChan      chan struct{}
        defaultIDMappings *idtools.IDMappings
        systemContext     *types.SystemContext // Never nil

        updateLock sync.RWMutex

        seccompEnabled  bool
        appArmorEnabled bool
}

(cri-o/server/server.go)

// Exec prepares a streaming endpoint to execute a command in the container.
func (s *Server) Exec(ctx context.Context, req *pb.ExecRequest) (resp *pb.ExecResponse, err error) {
        const operation = "exec"
        defer func() {
                recordOperation(operation, time.Now())
                recordError(operation, err)
        }()

        resp, err = s.getExec(req)
        if err != nil {
                return nil, fmt.Errorf("unable to prepare exec endpoint: %v", err)
        }

        return resp, nil
}

(cri-o/server/container_exec.go)

At the end of the chain, the container runtime executes the command on the worker node:

// ExecContainer prepares a streaming endpoint to execute a command in the container.
func (r *runtimeOCI) ExecContainer(c *Container, cmd []string, stdin io.Reader, stdout, stderr io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize) error {
        processFile, err := prepareProcessExec(c, cmd, tty)
        if err != nil {
                return err
        }
        defer os.RemoveAll(processFile.Name())

        args := []string{rootFlag, r.root, "exec"}
        args = append(args, "--process", processFile.Name(), c.ID())
        execCmd := exec.Command(r.path, args...)
        if v, found := os.LookupEnv("XDG_RUNTIME_DIR"); found {
                execCmd.Env = append(execCmd.Env, fmt.Sprintf("XDG_RUNTIME_DIR=%s", v))
        }
        var cmdErr, copyError error
        if tty {
                cmdErr = ttyCmd(execCmd, stdin, stdout, resize)
        } else {
                if stdin != nil {
                        // Use an os.Pipe here as it returns true *os.File objects.
                        // This way, if you run 'kubectl exec <pod> -i bash' (no tty) and type 'exit',
                        // the call below to execCmd.Run() can unblock because its Stdin is the read half
                        // of the pipe.
                        r, w, err := os.Pipe()
                        if err != nil {
                                return err
                        }
                        go func() { _, copyError = pools.Copy(w, stdin) }()

                        execCmd.Stdin = r
                }
                if stdout != nil {
                        execCmd.Stdout = stdout
                }
                if stderr != nil {
                        execCmd.Stderr = stderr
                }

                cmdErr = execCmd.Run()
        }

        if copyError != nil {
                return copyError
        }
        if exitErr, ok := cmdErr.(*exec.ExitError); ok {
                return &utilexec.ExitErrorWrapper{ExitError: exitErr}
        }
        return cmdErr
}

(cri-o/internal/oci/runtime_oci.go)
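The non-tty branch above (an os.Pipe feeding the child's stdin) is easy to reproduce in isolation. In this sketch `cat` stands in for the command run inside the container, and runWithPipedStdin is my name for the extracted pattern:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
	"os/exec"
)

// runWithPipedStdin mirrors the non-tty branch of ExecContainer: stdin is
// fed through an os.Pipe so the child sees a real *os.File, and stdout is
// captured into a buffer.
func runWithPipedStdin(name string, args []string, input string) (string, error) {
	cmd := exec.Command(name, args...)

	r, w, err := os.Pipe()
	if err != nil {
		return "", err
	}
	defer r.Close()
	go func() {
		io.WriteString(w, input) // feed stdin from a goroutine
		w.Close()                // closing the write end lets the child see EOF and exit
	}()
	cmd.Stdin = r

	var out bytes.Buffer
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		return "", err
	}
	return out.String(), nil
}

func main() {
	out, err := runWithPipedStdin("cat", nil, "hello from kubectl exec\n")
	if err != nil {
		panic(err)
	}
	fmt.Print(out) // hello from kubectl exec
}
```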


Finally, the kernel executes the command.


Takeaways

  • The API server can initiate a connection to the kubelet.
  • The following connections stay open until the interactive exec session ends:
    • between kubectl and the api-server;
    • between the api-server and the kubelet;
    • between the kubelet and the container runtime.
  • Neither kubectl nor the api-server can run anything on the worker nodes. The kubelet can, but to do so it also interacts with the container runtime.




Source: www.hab.com
