Running Apache Spark on Kubernetes

Dear readers, good afternoon. Today we will talk a little about Apache Spark and its development prospects.


In today's world of Big Data, Apache Spark is the de facto standard for developing batch data processing jobs. It is also used to build streaming applications that work in the micro-batch paradigm, processing and shipping data in small portions (Spark Structured Streaming). Traditionally, Spark has been part of the overall Hadoop stack, using YARN (or, in some cases, Apache Mesos) as the resource manager. By 2020, its use in this traditional form is in question for most companies due to the lack of decent Hadoop distributions: development of HDP and CDH has stalled, CDH is insufficiently mature and expensive, and the remaining Hadoop vendors have either ceased to exist or have an uncertain future. Therefore, running Apache Spark on Kubernetes is of growing interest to the community and to large companies: having become the standard for container orchestration and resource management in private and public clouds, Kubernetes solves the problem of inconvenient scheduling of Spark jobs on YARN and provides a steadily developing platform with many commercial and open distributions for companies of every size and stripe. Moreover, on the wave of its popularity, most companies have already set up a couple of installations of their own and built up expertise in using it, which simplifies the move.

Starting with version 2.3.0, Apache Spark has official support for running in a Kubernetes cluster, and today we will talk about the current maturity of this approach, the various options for using it, and the pitfalls you will encounter along the way.

First of all, let us look at the process of developing jobs and applications based on Apache Spark and highlight the typical cases in which you need to run a job on a Kubernetes cluster. OpenShift is used as the distribution throughout this post, and the commands given are for its command-line utility (oc). For other Kubernetes distributions, the corresponding commands of the standard Kubernetes command-line utility (kubectl) or their analogues (for example, for oc adm policy) can be used.

First use case: spark-submit

While developing jobs and applications, a developer needs to run them to debug data transformations. In principle, stubs can be used for this purpose, but development against real (even if test) instances of the target systems has proven faster and better for this class of tasks. When we debug against real instances of the target systems, two scenarios are possible:

  • the developer runs a Spark job locally in standalone mode;

  • the developer runs a Spark job on a Kubernetes cluster in a test loop.

The first option has a right to exist, but has a number of drawbacks:

  • every developer must be granted access from the workplace to all instances of the target systems they need;
  • the working machine must have enough resources to run the job being developed.

The second option is free of these drawbacks, since using a Kubernetes cluster lets you allocate the necessary pool of resources for running jobs and give it the required access to the target system instances, flexibly granting access to it via the Kubernetes role model for every member of the development team. Let us single it out as the first use case: launching Spark jobs from a local developer machine on a Kubernetes cluster in a test loop.

Let us take a closer look at the process of setting up Spark for local runs. To start using Spark, you need to install it:

mkdir /opt/spark
cd /opt/spark
wget http://mirror.linux-ia64.org/apache/spark/spark-2.4.5/spark-2.4.5.tgz
tar zxvf spark-2.4.5.tgz
rm -f spark-2.4.5.tgz

Then we build the packages needed for working with Kubernetes:

cd spark-2.4.5/
./build/mvn -Pkubernetes -DskipTests clean package

A full build takes a long time, and to create Docker images and run them on a Kubernetes cluster you really only need the jar files from the "assembly/" directory, so you can build just this subproject:

./build/mvn -f ./assembly/pom.xml -Pkubernetes -DskipTests clean package

To run Spark jobs on Kubernetes, you need to create a Docker image to use as the base image. Two approaches are possible here:

  • the built Docker image includes the executable Spark job code;
  • the built image includes only Spark and the necessary dependencies, while the executable code is hosted remotely (for example, in HDFS).

First, let us build a Docker image containing a test example of a Spark job. Spark ships with a utility for building Docker images called "docker-image-tool". Let us study its help:

./bin/docker-image-tool.sh --help

It can be used to build Docker images and upload them to remote registries, but by default it has a number of drawbacks:

  • it always builds 3 Docker images at once: for Spark, PySpark and R;
  • it does not let you specify an image name.

Therefore, we will use a modified version of this utility, shown below:

vi bin/docker-image-tool-upd.sh

#!/usr/bin/env bash

function error {
  echo "$@" 1>&2
  exit 1
}

if [ -z "${SPARK_HOME}" ]; then
  SPARK_HOME="$(cd "`dirname "$0"`"/..; pwd)"
fi
. "${SPARK_HOME}/bin/load-spark-env.sh"

function image_ref {
  local image="$1"
  local add_repo="${2:-1}"
  if [ $add_repo = 1 ] && [ -n "$REPO" ]; then
    image="$REPO/$image"
  fi
  if [ -n "$TAG" ]; then
    image="$image:$TAG"
  fi
  echo "$image"
}

function build {
  local BUILD_ARGS
  local IMG_PATH

  if [ ! -f "$SPARK_HOME/RELEASE" ]; then
    IMG_PATH=$BASEDOCKERFILE
    BUILD_ARGS=(
      ${BUILD_PARAMS}
      --build-arg
      img_path=$IMG_PATH
      --build-arg
      datagram_jars=datagram/runtimelibs
      --build-arg
      spark_jars=assembly/target/scala-$SPARK_SCALA_VERSION/jars
    )
  else
    IMG_PATH="kubernetes/dockerfiles"
    BUILD_ARGS=(${BUILD_PARAMS})
  fi

  if [ -z "$IMG_PATH" ]; then
    error "Cannot find docker image. This script must be run from a runnable distribution of Apache Spark."
  fi

  if [ -z "$IMAGE_REF" ]; then
    error "Cannot find docker image reference. Please add -i arg."
  fi

  local BINDING_BUILD_ARGS=(
    ${BUILD_PARAMS}
    --build-arg
    base_img=$(image_ref $IMAGE_REF)
  )
  local BASEDOCKERFILE=${BASEDOCKERFILE:-"$IMG_PATH/spark/docker/Dockerfile"}

  docker build $NOCACHEARG "${BUILD_ARGS[@]}" \
    -t $(image_ref $IMAGE_REF) \
    -f "$BASEDOCKERFILE" .
}

function push {
  docker push "$(image_ref $IMAGE_REF)"
}

function usage {
  cat <<EOF
Usage: $0 [options] [command]
Builds or pushes the built-in Spark Docker image.

Commands:
  build       Build image. Requires a repository address to be provided if the image will be
              pushed to a different registry.
  push        Push a pre-built image to a registry. Requires a repository address to be provided.

Options:
  -f file               Dockerfile to build for JVM based Jobs. By default builds the Dockerfile shipped with Spark.
  -p file               Dockerfile to build for PySpark Jobs. Builds Python dependencies and ships with Spark.
  -R file               Dockerfile to build for SparkR Jobs. Builds R dependencies and ships with Spark.
  -r repo               Repository address.
  -i name               Image name to apply to the built image, or to identify the image to be pushed.  
  -t tag                Tag to apply to the built image, or to identify the image to be pushed.
  -m                    Use minikube's Docker daemon.
  -n                    Build docker image with --no-cache
  -b arg      Build arg to build or push the image. For multiple build args, this option needs to
              be used separately for each build arg.

Using minikube when building images will do so directly into minikube's Docker daemon.
There is no need to push the images into minikube in that case, they'll be automatically
available when running applications inside the minikube cluster.

Check the following documentation for more information on using the minikube Docker daemon:

  https://kubernetes.io/docs/getting-started-guides/minikube/#reusing-the-docker-daemon

Examples:
  - Build image in minikube with tag "testing"
    $0 -m -t testing build

  - Build and push image with tag "v2.3.0" to docker.io/myrepo
    $0 -r docker.io/myrepo -t v2.3.0 build
    $0 -r docker.io/myrepo -t v2.3.0 push
EOF
}

if [[ "$@" = *--help ]] || [[ "$@" = *-h ]]; then
  usage
  exit 0
fi

REPO=
TAG=
BASEDOCKERFILE=
NOCACHEARG=
BUILD_PARAMS=
IMAGE_REF=
while getopts f:mr:t:nb:i: option
do
 case "${option}"
 in
 f) BASEDOCKERFILE=${OPTARG};;
 r) REPO=${OPTARG};;
 t) TAG=${OPTARG};;
 n) NOCACHEARG="--no-cache";;
 i) IMAGE_REF=${OPTARG};;
 b) BUILD_PARAMS=${BUILD_PARAMS}" --build-arg "${OPTARG};;
 esac
done

case "${@: -1}" in
  build)
    build
    ;;
  push)
    if [ -z "$REPO" ]; then
      usage
      exit 1
    fi
    push
    ;;
  *)
    usage
    exit 1
    ;;
esac

With its help, we build a basic Spark image containing a test job for calculating Pi with Spark (here {docker-registry-url} is the URL of your Docker image registry, {repo} is the name of the repository inside the registry, which matches the project in OpenShift, {image-name} is the name of the image (if a three-level separation of images is used, as, for example, in the integrated Red Hat OpenShift image registry), and {tag} is the tag of this version of the image):

./bin/docker-image-tool-upd.sh -f resource-managers/kubernetes/docker/src/main/dockerfiles/spark/Dockerfile -r {docker-registry-url}/{repo} -i {image-name} -t {tag} build

Log in to the OKD cluster using the console utility (here {OKD-API-URL} is the OKD cluster API URL):

oc login {OKD-API-URL}

Get the current user's token for authorization in the Docker Registry:

oc whoami -t

Log in to the internal Docker Registry of the OKD cluster (using the token obtained with the previous command as the password):

docker login {docker-registry-url}

Upload the built Docker image to the OKD Docker Registry:

./bin/docker-image-tool-upd.sh -r {docker-registry-url}/{repo} -i {image-name} -t {tag} push

Let us check that the built image is available in OKD. To do this, open the URL with the list of images of the corresponding project in the browser (here {project} is the name of the project inside the OpenShift cluster, {OKD-WEBUI-URL} is the URL of the OpenShift web console): https://{OKD-WEBUI-URL}/console/project/{project}/browse/images/{image-name}.

To run jobs, a service account must be created with the privileges to run pods as root (we will discuss this point later):

oc create sa spark -n {project}
oc adm policy add-scc-to-user anyuid -z spark -n {project}

Run the spark-submit command to publish a Spark job to the OKD cluster, specifying the created service account and the Docker image:

/opt/spark/bin/spark-submit \
  --name spark-test \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.namespace={project} \
  --conf spark.submit.deployMode=cluster \
  --conf spark.kubernetes.container.image={docker-registry-url}/{repo}/{image-name}:{tag} \
  --conf spark.master=k8s://https://{OKD-API-URL} \
  local:///opt/spark/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.5.jar

Here:

--name is the job name that will participate in forming the names of the Kubernetes pods;

--class is the class of the executable file, called when the job starts;

--conf are Spark configuration parameters;

spark.executor.instances is the number of Spark executors to launch;

spark.kubernetes.authenticate.driver.serviceAccountName is the name of the Kubernetes service account used when launching the pods (to define the security context and capabilities when interacting with the Kubernetes API);

spark.kubernetes.namespace is the Kubernetes namespace in which the driver and executor pods will be launched;

spark.submit.deployMode is the Spark launch mode ("cluster" is used for standard spark-submit, "client" for the Spark Operator and later versions of Spark);

spark.kubernetes.container.image is the Docker image used to launch the pods;

spark.master is the Kubernetes API URL (the external one is specified, since access happens from the local machine);

local:// is the path to the Spark executable inside the Docker image.
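As a sketch, the option list above can be wrapped in a small helper so that CI scripts do not repeat it; the variable names (PROJECT, IMAGE, K8S_API) and the example values below are hypothetical placeholders, not part of Spark itself:

```shell
#!/usr/bin/env bash
# Hypothetical helper that composes the spark-submit invocation shown above
# from environment variables. PROJECT, IMAGE and K8S_API are placeholders.
build_submit_cmd() {
  local name="$1" main_class="$2" app="$3"
  echo "/opt/spark/bin/spark-submit" \
       "--name $name" \
       "--class $main_class" \
       "--conf spark.executor.instances=${EXECUTORS:-3}" \
       "--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark" \
       "--conf spark.kubernetes.namespace=$PROJECT" \
       "--conf spark.submit.deployMode=cluster" \
       "--conf spark.kubernetes.container.image=$IMAGE" \
       "--conf spark.master=k8s://https://$K8S_API" \
       "$app"
}

# Print the command that would be executed for the SparkPi example:
PROJECT=demo \
IMAGE=registry.example.com/demo/spark:v1 \
K8S_API=api.example.com:6443 \
build_submit_cmd spark-test org.apache.spark.examples.SparkPi \
  local:///opt/spark/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.5.jar
```

The helper only prints the command; piping it to `bash` (or dropping the `echo`) would execute it.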

Submit the job, then go to the OKD project and study the created pods: https://{OKD-WEBUI-URL}/console/project/{project}/browse/pods.

To simplify the development process, another option can be used, in which a common base image of Spark is created and used by all jobs, while snapshots of the executable files are published to external storage (for example, Hadoop) and specified in the spark-submit call as a link. In this case, you can run different versions of Spark jobs without rebuilding Docker images, using, for example, WebHDFS to publish the executables. We send a request to create a file (here {host} is the host of the WebHDFS service, {port} is the port of the WebHDFS service, {path-to-file-on-hdfs} is the desired path to the file on HDFS):

curl -i -X PUT "http://{host}:{port}/webhdfs/v1/{path-to-file-on-hdfs}?op=CREATE"

You will receive a response like this (here {location} is the URL that must be used to upload the file):

HTTP/1.1 307 TEMPORARY_REDIRECT
Location: {location}
Content-Length: 0

Upload the Spark executable file to HDFS (here {path-to-local-file} is the path to the Spark executable file on the current host):

curl -i -X PUT -T {path-to-local-file} "{location}"

After that, we can run spark-submit using the Spark file uploaded to HDFS (here {class-name} is the name of the class that needs to be launched to complete the job):

/opt/spark/bin/spark-submit \
  --name spark-test \
  --class {class-name} \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.kubernetes.namespace={project} \
  --conf spark.submit.deployMode=cluster \
  --conf spark.kubernetes.container.image={docker-registry-url}/{repo}/{image-name}:{tag} \
  --conf spark.master=k8s://https://{OKD-API-URL} \
  hdfs://{host}:{port}/{path-to-file-on-hdfs}

It should be noted that in order to access HDFS and make the job work, you may need to change the Dockerfile and the entrypoint.sh script: add a directive to the Dockerfile to copy the dependent libraries to the /opt/spark/jars directory, and include the HDFS configuration directory in SPARK_CLASSPATH in entrypoint.sh.
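As an illustration, those additions might look like the fragment below; the hadoop-conf and hdfs-libs directory names are assumptions about your build context, not part of the stock Spark Dockerfile:

```dockerfile
# Hypothetical additions to the Spark Dockerfile for HDFS access.
# hadoop-conf/ (site XML files) and hdfs-libs/ are assumed to exist
# in the Docker build context.
COPY hadoop-conf /opt/spark/hadoop-conf
COPY hdfs-libs/*.jar /opt/spark/jars/
```

And in entrypoint.sh, before the driver or executor is launched, extend the classpath accordingly, for example: `export SPARK_CLASSPATH="$SPARK_CLASSPATH:/opt/spark/hadoop-conf"`.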

Second use case: Apache Livy

Further, once a job has been developed and its result needs to be tested, the question arises of launching it as part of the CI/CD process and tracking the status of its execution. Of course, you could run it with a local spark-submit call, but this complicates the CI/CD infrastructure, since it requires installing and configuring Spark on the CI server agents/runners and setting up access to the Kubernetes API. For this case, Apache Livy was chosen as the target implementation: a REST API for running Spark jobs, hosted inside the Kubernetes cluster. With it, you can run Spark jobs on a Kubernetes cluster using regular cURL requests, which is easy to implement on any CI solution, and its placement inside the Kubernetes cluster solves the problem of authentication when interacting with the Kubernetes API.


Let us single it out as the second use case: running Spark jobs as part of a CI/CD process on a Kubernetes cluster in a test loop.

A few words about Apache Livy: it works as an HTTP server providing a web interface and a RESTful API that lets you remotely launch spark-submit by passing the necessary parameters. Traditionally it has shipped as part of the HDP distribution, but it can also be deployed to OKD or any other Kubernetes installation using the appropriate manifest and a set of Docker images, such as this one: github.com/ttauveron/k8s-big-data-experiments/tree/master/livy-spark-2.3. For our case, a similar Docker image was built, including Spark version 2.4.5, from the following Dockerfile:

FROM java:8-alpine

ENV SPARK_HOME=/opt/spark
ENV LIVY_HOME=/opt/livy
ENV HADOOP_CONF_DIR=/etc/hadoop/conf
ENV SPARK_USER=spark

WORKDIR /opt

RUN apk add --update openssl wget bash && \
    wget -P /opt https://downloads.apache.org/spark/spark-2.4.5/spark-2.4.5-bin-hadoop2.7.tgz && \
    tar xvzf spark-2.4.5-bin-hadoop2.7.tgz && \
    rm spark-2.4.5-bin-hadoop2.7.tgz && \
    ln -s /opt/spark-2.4.5-bin-hadoop2.7 /opt/spark

RUN wget http://mirror.its.dal.ca/apache/incubator/livy/0.7.0-incubating/apache-livy-0.7.0-incubating-bin.zip && \
    unzip apache-livy-0.7.0-incubating-bin.zip && \
    rm apache-livy-0.7.0-incubating-bin.zip && \
    ln -s /opt/apache-livy-0.7.0-incubating-bin /opt/livy && \
    mkdir /var/log/livy && \
    ln -s /var/log/livy /opt/livy/logs && \
    cp /opt/livy/conf/log4j.properties.template /opt/livy/conf/log4j.properties

ADD livy.conf /opt/livy/conf
ADD spark-defaults.conf /opt/spark/conf/spark-defaults.conf
ADD entrypoint.sh /entrypoint.sh

ENV PATH="/opt/livy/bin:${PATH}"

EXPOSE 8998

ENTRYPOINT ["/entrypoint.sh"]
CMD ["livy-server"]

The resulting image can be built and uploaded to your existing Docker repository, for example, the internal OKD registry. To deploy it, use the following manifest (here {registry-url} is the URL of the Docker image registry, {image-name} is the Docker image name, {tag} is the Docker image tag, {livy-url} is the desired URL where the Livy server will be reachable; the "Route" manifest is used if Red Hat OpenShift is used as the Kubernetes distribution, otherwise a corresponding Ingress or Service manifest of type NodePort is used):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: livy
  name: livy
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      component: livy
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        component: livy
    spec:
      containers:
        - command:
            - livy-server
          env:
            - name: K8S_API_HOST
              value: localhost
            - name: SPARK_KUBERNETES_IMAGE
              value: 'gnut3ll4/spark:v1.0.14'
          image: '{registry-url}/{image-name}:{tag}'
          imagePullPolicy: Always
          name: livy
          ports:
            - containerPort: 8998
              name: livy-rest
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/log/livy
              name: livy-log
            - mountPath: /opt/.livy-sessions/
              name: livy-sessions
            - mountPath: /opt/livy/conf/livy.conf
              name: livy-config
              subPath: livy.conf
            - mountPath: /opt/spark/conf/spark-defaults.conf
              name: spark-config
              subPath: spark-defaults.conf
        - command:
            - /usr/local/bin/kubectl
            - proxy
            - '--port'
            - '8443'
          image: 'gnut3ll4/kubectl-sidecar:latest'
          imagePullPolicy: Always
          name: kubectl
          ports:
            - containerPort: 8443
              name: k8s-api
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: spark
      serviceAccountName: spark
      terminationGracePeriodSeconds: 30
      volumes:
        - emptyDir: {}
          name: livy-log
        - emptyDir: {}
          name: livy-sessions
        - configMap:
            defaultMode: 420
            items:
              - key: livy.conf
                path: livy.conf
            name: livy-config
          name: livy-config
        - configMap:
            defaultMode: 420
            items:
              - key: spark-defaults.conf
                path: spark-defaults.conf
            name: livy-config
          name: spark-config
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: livy-config
data:
  livy.conf: |-
    livy.spark.deploy-mode=cluster
    livy.file.local-dir-whitelist=/opt/.livy-sessions/
    livy.spark.master=k8s://http://localhost:8443
    livy.server.session.state-retain.sec = 8h
  spark-defaults.conf: 'spark.kubernetes.container.image        "gnut3ll4/spark:v1.0.14"'
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: livy
  name: livy
spec:
  ports:
    - name: livy-rest
      port: 8998
      protocol: TCP
      targetPort: 8998
  selector:
    component: livy
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: livy
  name: livy
spec:
  host: {livy-url}
  port:
    targetPort: livy-rest
  to:
    kind: Service
    name: livy
    weight: 100
  wildcardPolicy: None

After applying it and successfully launching the pod, the Livy graphical interface is available at http://{livy-url}/ui. With Livy, we can publish our Spark job using a REST request from, for example, Postman. An example collection with requests is presented below (configuration arguments with the variables needed by the launched job can be passed in the "args" array):

{
    "info": {
        "_postman_id": "be135198-d2ff-47b6-a33e-0d27b9dba4c8",
        "name": "Spark Livy",
        "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
    },
    "item": [
        {
            "name": "1 Submit job with jar",
            "request": {
                "method": "POST",
                "header": [
                    {
                        "key": "Content-Type",
                        "value": "application/json"
                    }
                ],
                "body": {
                    "mode": "raw",
                    "raw": "{nt"file": "local:///opt/spark/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.5.jar", nt"className": "org.apache.spark.examples.SparkPi",nt"numExecutors":1,nt"name": "spark-test-1",nt"conf": {ntt"spark.jars.ivy": "/tmp/.ivy",ntt"spark.kubernetes.authenticate.driver.serviceAccountName": "spark",ntt"spark.kubernetes.namespace": "{project}",ntt"spark.kubernetes.container.image": "{docker-registry-url}/{repo}/{image-name}:{tag}"nt}n}"
                },
                "url": {
                    "raw": "http://{livy-url}/batches",
                    "protocol": "http",
                    "host": [
                        "{livy-url}"
                    ],
                    "path": [
                        "batches"
                    ]
                }
            },
            "response": []
        },
        {
            "name": "2 Submit job without jar",
            "request": {
                "method": "POST",
                "header": [
                    {
                        "key": "Content-Type",
                        "value": "application/json"
                    }
                ],
                "body": {
                    "mode": "raw",
                    "raw": "{nt"file": "hdfs://{host}:{port}/{path-to-file-on-hdfs}", nt"className": "{class-name}",nt"numExecutors":1,nt"name": "spark-test-2",nt"proxyUser": "0",nt"conf": {ntt"spark.jars.ivy": "/tmp/.ivy",ntt"spark.kubernetes.authenticate.driver.serviceAccountName": "spark",ntt"spark.kubernetes.namespace": "{project}",ntt"spark.kubernetes.container.image": "{docker-registry-url}/{repo}/{image-name}:{tag}"nt},nt"args": [ntt"HADOOP_CONF_DIR=/opt/spark/hadoop-conf",ntt"MASTER=k8s://https://kubernetes.default.svc:8443"nt]n}"
                },
                "url": {
                    "raw": "http://{livy-url}/batches",
                    "protocol": "http",
                    "host": [
                        "{livy-url}"
                    ],
                    "path": [
                        "batches"
                    ]
                }
            },
            "response": []
        }
    ],
    "event": [
        {
            "listen": "prerequest",
            "script": {
                "id": "41bea1d0-278c-40c9-ad42-bf2e6268897d",
                "type": "text/javascript",
                "exec": [
                    ""
                ]
            }
        },
        {
            "listen": "test",
            "script": {
                "id": "3cdd7736-a885-4a2d-9668-bd75798f4560",
                "type": "text/javascript",
                "exec": [
                    ""
                ]
            }
        }
    ],
    "protocolProfileBehavior": {}
}

Execute the first request from the collection, go to the OKD interface and check that the job has launched successfully: https://{OKD-WEBUI-URL}/console/project/{project}/browse/pods. At the same time, a session will appear in the Livy interface (http://{livy-url}/ui), within which, using the Livy API or the graphical interface, you can track the progress of the job and study the session logs.
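For CI scripts that do not use Postman, the same request can be sent with plain cURL. A minimal sketch is shown below; it only prints the payload, with the actual calls (which assume a reachable Livy server at {livy-url}) left as comments:

```shell
#!/usr/bin/env bash
# Sketch of submitting the same job through the Livy REST API with cURL.
# The {placeholders} in the payload stand for your environment's values.
livy_payload() {
  cat <<'EOF'
{
  "file": "local:///opt/spark/examples/target/scala-2.11/jars/spark-examples_2.11-2.4.5.jar",
  "className": "org.apache.spark.examples.SparkPi",
  "numExecutors": 1,
  "name": "spark-test-1",
  "conf": {
    "spark.jars.ivy": "/tmp/.ivy",
    "spark.kubernetes.authenticate.driver.serviceAccountName": "spark",
    "spark.kubernetes.namespace": "{project}",
    "spark.kubernetes.container.image": "{docker-registry-url}/{repo}/{image-name}:{tag}"
  }
}
EOF
}

# With a reachable Livy server, the batch is submitted and polled like this:
#   curl -s -X POST -H 'Content-Type: application/json' \
#        -d "$(livy_payload)" "http://{livy-url}/batches"
#   curl -s "http://{livy-url}/batches/{batch-id}/state"
livy_payload
```

The POST returns a batch id, which the second call uses to poll the job state.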

Now let us show how Livy works. To do this, examine the logs of the Livy container inside the pod with the Livy server: https://{OKD-WEBUI-URL}/console/project/{project}/browse/pods/{livy-pod-name}?tab=logs. From them we can see that when the Livy REST API is called, a spark-submit is executed inside the container named "livy", similar to the one we used above (here {livy-pod-name} is the name of the created pod with the Livy server). The collection also provides a second request that lets you run jobs whose Spark executable is hosted remotely, using the Livy server.

Third use case: Spark Operator

Now that the job has been tested, the question arises of running it on a schedule. The native way to run jobs on a regular basis in a Kubernetes cluster is the CronJob entity, and you can use it, but at the moment the use of operators to manage applications in Kubernetes is very popular, and for Spark there is a fairly mature operator, which is also used in enterprise-level solutions (for example, Lightbend FastData Platform). We recommend using it: the current stable version of Spark (2.4.5) has rather limited configuration options for running Spark jobs on Kubernetes, while the next major version (3.0.0) declares full Kubernetes support, but its release date remains unknown. The Spark Operator compensates for this shortcoming by adding important configuration options (for example, mounting a ConfigMap with the Hadoop access configuration into the Spark pods) and the ability to run jobs on a regular schedule.

Let us single it out as the third use case: running Spark jobs on a schedule on a Kubernetes cluster in a production loop.

The Spark Operator is open source and developed within the Google Cloud Platform: github.com/GoogleCloudPlatform/spark-on-k8s-operator. It can be installed in 3 ways:

  1. As part of the Lightbend FastData Platform / Cloudflow installation;
  2. Using Helm:
    helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
    helm install incubator/sparkoperator --namespace spark-operator
    	

  3. Using manifests from the official repository (https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/tree/master/manifest). The following is worth noting: Cloudflow includes an operator with API version v1beta1. If this type of installation is used, the Spark application manifest descriptions should be based on the examples in Git tagged with the appropriate API version, for example, "v1beta1-0.9.0-2.4.0". The operator version can be found in the description of the CRD included with the operator, in the "versions" dictionary:
    oc get crd sparkapplications.sparkoperator.k8s.io -o yaml
    	

If the operator is installed correctly, an active pod with the Spark operator will appear in the corresponding project (for example, cloudflow-fdp-sparkoperator in the Cloudflow space for the Cloudflow installation), and a corresponding Kubernetes resource type named "sparkapplications" will appear. You can examine the available Spark applications with the following command:

oc get sparkapplications -n {project}

To run jobs using the Spark Operator, you need to do 3 things:

  • tsim ib daim duab Docker uas suav nrog tag nrho cov tsev qiv ntawv tsim nyog, nrog rau kev teeb tsa thiab cov ntaub ntawv ua tiav. Hauv daim duab lub hom phiaj, qhov no yog cov duab tsim nyob rau theem CI / CD thiab sim ntawm pawg xeem;
  • luam tawm Docker duab mus rau npe nkag tau los ntawm Kubernetes pawg;
  • tsim ib qho manifest nrog "SparkApplication" hom thiab cov lus piav qhia ntawm txoj haujlwm yuav tsum tau pib. Piv txwv manifests muaj nyob rau hauv lub official repository (piv txwv li. github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/v1beta1-0.9.0-2.4.0/examples/spark-pi.yaml). Muaj cov ntsiab lus tseem ceeb uas yuav tsum nco ntsoov txog lub manifesto:
    1. phau ntawv txhais lus "apiVersion" yuav tsum qhia qhov API version sib xws rau tus neeg teb xov tooj version;
    2. phau ntawv txhais lus "metadata.namespace" yuav tsum qhia lub namespace uas daim ntawv thov yuav raug tso tawm;
    3. phau ntawv txhais lus "spec.image" yuav tsum muaj qhov chaw nyob ntawm cov duab Docker tsim nyob rau hauv ib daim ntawv teev npe nkag tau;
    4. phau ntawv txhais lus "spec.mainClass" yuav tsum muaj cov chav kawm ua haujlwm Spark uas yuav tsum tau khiav thaum txheej txheem pib;
    5. phau ntawv txhais lus "spec.mainApplicationFile" yuav tsum muaj txoj hauv kev mus rau cov ntaub ntawv executable;
    6. cov phau ntawv txhais lus "spec.sparkVersion" yuav tsum qhia qhov version ntawm Spark siv;
    7. phau ntawv txhais lus "spec.driver.serviceAccount" yuav tsum qhia tus account kev pabcuam nyob rau hauv qhov sib thooj Kubernetes namespace uas yuav siv los khiav daim ntawv thov;
    8. phau ntawv txhais lus "spec.executor" yuav tsum qhia tus naj npawb ntawm cov peev txheej faib rau daim ntawv thov;
    9. phau ntawv txhais lus "spec.volumeMounts" yuav tsum qhia cov npe hauv zos uas cov ntaub ntawv Spark hauv zos yuav raug tsim.
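The first two steps can be sketched roughly as follows. This is a minimal illustration, not the article's actual build: the application jar name and path are assumptions standing in for an artifact produced by your CI/CD pipeline, and the base image is the stock operator image used in the examples below.

```dockerfile
# Hedged sketch: extend the stock Spark 2.4 image with an application jar.
FROM gcr.io/spark-operator/spark:v2.4.0

# my-spark-app-1.0.jar is a hypothetical artifact from your CI/CD build
COPY target/my-spark-app-1.0.jar /opt/spark/jars/my-spark-app.jar
```

The image would then be built and pushed with `docker build` / `docker push` to a registry reachable from the Kubernetes cluster, and its address would go into the "spec.image" key of the manifest.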

An example of such a manifest (here {spark-service-account} is a service account inside the Kubernetes cluster for running Spark tasks):

apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: {project}
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v2.4.0"
  imagePullPolicy: Always
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar"
  sparkVersion: "2.4.0"
  restartPolicy:
    type: Never
  volumes:
    - name: "test-volume"
      hostPath:
        path: "/tmp"
        type: Directory
  driver:
    cores: 0.1
    coreLimit: "200m"
    memory: "512m"
    labels:
      version: 2.4.0
    serviceAccount: {spark-service-account}
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 2.4.0
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"

This manifest specifies a service account for which, before publishing the manifest, you must create the necessary role bindings granting the Spark application the access rights it needs to interact with the Kubernetes API (if necessary). In our case, the application needs the rights to create Pods. Let's create the necessary role binding:

oc adm policy add-role-to-user edit system:serviceaccount:{project}:{spark-service-account} -n {project}
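The command above binds the broad built-in "edit" role. On plain Kubernetes (or if you prefer a narrower grant), roughly equivalent RBAC objects can be sketched as below; the role name and the exact resource list are assumptions — the driver needs at least the rights to manage executor pods, and typically its headless service and configmaps as well:

```yaml
# Hedged sketch of a narrower alternative to the built-in "edit" role.
# Names in {braces} are placeholders, as elsewhere in this article.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-role
  namespace: {project}
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-role-binding
  namespace: {project}
subjects:
  - kind: ServiceAccount
    name: {spark-service-account}
    namespace: {project}
roleRef:
  kind: Role
  name: spark-role
  apiGroup: rbac.authorization.k8s.io
```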

It is also worth noting that this manifest specification may include a "hadoopConfigMap" parameter, which allows you to specify a ConfigMap with the Hadoop configuration without first having to place the corresponding file in the Docker image. It is also suited to regularly scheduled tasks: using the "schedule" parameter, a schedule for running the task can be specified.
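A scheduled run can be sketched with the operator's ScheduledSparkApplication resource, which wraps a regular application spec in a template with a cron schedule. This is a hedged illustration: the schedule value, the "spark-pi-nightly" name, and the "hadoop-conf" ConfigMap are assumptions for the example.

```yaml
# Hedged sketch of a scheduled task; the ConfigMap name is hypothetical.
apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: ScheduledSparkApplication
metadata:
  name: spark-pi-nightly
  namespace: {project}
spec:
  schedule: "0 1 * * *"       # standard cron syntax: every night at 01:00
  concurrencyPolicy: Forbid   # do not start a run while the previous one is active
  template:
    type: Scala
    mode: cluster
    image: "gcr.io/spark-operator/spark:v2.4.0"
    mainClass: org.apache.spark.examples.SparkPi
    mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar"
    sparkVersion: "2.4.0"
    hadoopConfigMap: hadoop-conf   # assumed ConfigMap holding the Hadoop configuration
    restartPolicy:
      type: Never
```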

After that, we save our manifest to the spark-pi.yaml file and apply it to our Kubernetes cluster:

oc apply -f spark-pi.yaml

This will create an object of the "sparkapplications" type:

oc get sparkapplications -n {project}
> NAME       AGE
> spark-pi   22h

In this case, a pod with the application will be created, and its status will be displayed in the created "sparkapplications" object. You can view it with the following command:

oc get sparkapplications spark-pi -o yaml -n {project}

Upon completion of the task, the pod will move to the "Completed" status, which will also be updated in "sparkapplications". Application logs can be viewed in the browser or using the following command (here {sparkapplications-pod-name} is the name of the pod of the running task):

oc logs {sparkapplications-pod-name} -n {project}
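Status checks like the ones above can also be scripted. A minimal sketch in Python is shown below; it parses the JSON that `oc get sparkapplications spark-pi -o json` would print and pulls out the application state. The `.status.applicationState.state` path follows the operator's custom resource, but treat the exact layout as an assumption to verify against your operator version.

```python
# Hedged sketch: extract the application state from the JSON printed by
# `oc get sparkapplications <name> -o json` (spark-on-k8s-operator CRD layout).
import json


def app_state(raw_json: str) -> str:
    """Return the reported state of a SparkApplication, or 'UNKNOWN'."""
    obj = json.loads(raw_json)
    return (obj.get("status", {})
               .get("applicationState", {})
               .get("state", "UNKNOWN"))


if __name__ == "__main__":
    # Trimmed-down sample of the operator's status block
    sample = '{"status": {"applicationState": {"state": "COMPLETED"}}}'
    print(app_state(sample))  # COMPLETED
```

In practice the raw JSON would come from a subprocess call to `oc`, and the function could be polled in a loop until the state leaves "RUNNING".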

Spark tasks can also be managed using the specialized sparkctl utility. To install it, clone the repository with its source code, install Go, and build the utility:

git clone https://github.com/GoogleCloudPlatform/spark-on-k8s-operator.git
cd spark-on-k8s-operator/
wget https://dl.google.com/go/go1.13.3.linux-amd64.tar.gz
tar -xzf go1.13.3.linux-amd64.tar.gz
sudo mv go /usr/local
mkdir $HOME/Projects
export GOROOT=/usr/local/go
export GOPATH=$HOME/Projects
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
go version
cd sparkctl
go build -o sparkctl
sudo mv sparkctl /usr/local/bin

Let's examine the list of running Spark tasks:

sparkctl list -n {project}

Let's create a task description for a Spark task:

vi spark-app.yaml

apiVersion: "sparkoperator.k8s.io/v1beta1"
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: {project}
spec:
  type: Scala
  mode: cluster
  image: "gcr.io/spark-operator/spark:v2.4.0"
  imagePullPolicy: Always
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.11-2.4.0.jar"
  sparkVersion: "2.4.0"
  restartPolicy:
    type: Never
  volumes:
    - name: "test-volume"
      hostPath:
        path: "/tmp"
        type: Directory
  driver:
    cores: 1
    coreLimit: "1000m"
    memory: "512m"
    labels:
      version: 2.4.0
    serviceAccount: spark
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"
  executor:
    cores: 1
    instances: 1
    memory: "512m"
    labels:
      version: 2.4.0
    volumeMounts:
      - name: "test-volume"
        mountPath: "/tmp"

Let's launch the described task using sparkctl:

sparkctl create spark-app.yaml -n {project}

Let's examine the list of running Spark tasks:

sparkctl list -n {project}

Let's examine the list of events of the launched Spark task:

sparkctl event spark-pi -n {project} -f

Let's examine the status of the running Spark task:

sparkctl status spark-pi -n {project}

In conclusion, I would like to note the disadvantages discovered while operating the current stable version of Spark (2.4.5) in Kubernetes:

  1. The first and, perhaps, main disadvantage is the lack of Data Locality. Despite all the shortcomings of YARN, there were also advantages to using it, for example, the principle of delivering code to data (rather than data to code). Thanks to it, Spark tasks were executed on the nodes where the data involved in the computations was located, which noticeably reduced the time spent delivering data over the network. When using Kubernetes, we are faced with the need to move the data involved in a task across the network. If the data is large enough, task execution time can increase significantly, and a fairly large amount of disk space must also be allocated to the Spark executor instances for its temporary storage. This disadvantage can be mitigated by using specialized software that provides data locality in Kubernetes (for example, Alluxio), but this means having to store a complete copy of the data on the nodes of the Kubernetes cluster.
  2. The second important disadvantage is security. By default, security-related features for running Spark tasks are disabled, the use of Kerberos is not covered in the official documentation (although the corresponding options were introduced in version 3.0.0, which will require additional work), and the security documentation for Spark (https://spark.apache.org/docs/2.4.5/security.html) only covers YARN, Mesos, and Standalone Cluster. At the same time, the user under which Spark tasks are launched cannot be specified directly - we only specify the service account under which the task will run, and the user is chosen based on the configured security policies. As a result, either the root user is used, which is unsafe in a production environment, or a user with a random UID, which is inconvenient when distributing access rights to data (this can be solved by creating PodSecurityPolicies and binding them to the corresponding service accounts). Currently, the solution is either to place all the necessary files directly in the Docker image, or to modify the Spark launch script to use the mechanism for storing and retrieving secrets adopted in your organization.
  3. Running Spark tasks on Kubernetes is still officially in experimental mode, and there may be significant changes in the artifacts used (configuration files, Docker base images, and launch scripts) in the future. Indeed, while preparing this material, versions 2.3.0 and 2.4.5 were tested, and their behavior differed significantly.

Let's wait for updates - a new version of Spark (3.0.0) was recently released, which brought significant changes to the operation of Spark on Kubernetes but retained the experimental status of support for this resource manager. Perhaps the next updates will truly make it possible to recommend, without reservation, abandoning YARN and running Spark tasks on Kubernetes without fearing for the security of your system and without having to modify functional components on your own.


Source: www.hab.com
