
[ Kans 3 Study - 2w ] 3. Using kind

su''@ 2024. 9. 8. 15:25
This is a post from the CloudNetaStudy - Kubernetes Network 3rd-cohort hands-on study.

 

1. Loading an external Docker image into a kind Kubernetes cluster - Link
 

Reference: kind – Quick Start (kind.sigs.k8s.io)

  • Build a simple web server Docker image (a hedged build sketch follows below)
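The build step itself was collapsed in the original post. A minimal sketch of what it could look like, assuming an nginx-based static page and the $NICKNAME-myweb:1.0 tag used in the commands below (the contents are illustrative, not the original Dockerfile):

# (assumption) build a simple web server image locally
cat <<EOF > Dockerfile
FROM nginx:alpine
RUN echo "<h1>myweb 1.0</h1>" > /usr/share/nginx/html/index.html
EXPOSE 80
EOF

docker build -t $NICKNAME-myweb:1.0 .
docker images | grep myweb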

 

# Check the local images on each node (node = container)
docker exec -it $CLUSTERNAME-control-plane crictl images
docker exec -it $CLUSTERNAME-worker crictl images

# Load the Docker image into the kind Kubernetes cluster
kind load docker-image $NICKNAME-myweb:1.0 --name $CLUSTERNAME

# Check the local images on each node again (node = container)
docker exec -it $CLUSTERNAME-control-plane crictl images
docker exec -it $CLUSTERNAME-worker crictl images
...
docker.io/library/gasida-myweb                  1.0                  56964cdd0ebf9       51.5MB
...

# Create the Deployment and confirm the local image is used
kubectl create deployment deploy-myweb --image=$NICKNAME-myweb:1.0
kubectl get deploy,pod
kubectl scale deployment deploy-myweb --replicas 2

# Deploy the Service
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: deploy-myweb
spec:
  type: NodePort
  ports:
  - name: svc-mywebport
    nodePort: 31001
    port: 80
  selector:
    app: deploy-myweb
EOF

# Check the Service
kubectl get svc,ep deploy-myweb
curl localhost:31001
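
Note: curl localhost:31001 answers only because the kind cluster built earlier publishes that NodePort on the host. As an assumption about how $CLUSTERNAME was created in the previous step, the relevant fragment of such a kind config would look like:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31001
    hostPort: 31001
- role: worker
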
  • Setting Kubernetes version : deploying a cluster with a specific Kubernetes version - Link, Release Hub
    • v1.30.4
      # Deploy a cluster: pin a specific version directly with --image; available versions are listed at the Docker Hub link above
      kind create cluster --image kindest/node:v1.30.4
      docker images
      
      # Check the node version
      kubectl get nodes
      
      # Delete the cluster
      kind delete cluster

    • v1.29.4
      # Deploy the cluster
      cat <<EOT> kind-v29-4.yaml
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
        image: kindest/node:v1.29.4@sha256:3abb816a5b1061fb15c6e9e60856ec40d56b7b52bcea5f5f1350bc6e2320b6f8
      EOT
      
      kind create cluster --config kind-v29-4.yaml
      
      # Check the node version
      kubectl get nodes
      
      # Delete the cluster
      kind delete cluster
2. Ingress : Ingress Nginx, Ingress Kong, Contour guides - Link
  • Ingress Nginx setup
    • extraPortMappings allow the local host to make requests to the Ingress controller over ports 80/443
    • node-labels only allow the ingress controller to run on a specific node(s) matching the label selector
# Deploy the cluster: node label and port mappings
cat <<EOT> kind-ingress.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  - containerPort: 30000
    hostPort: 30000
EOT

kind create cluster --config kind-ingress.yaml --name myk8s

# Check the deployment
docker ps
docker port myk8s-control-plane
kubectl get node

# Check the node labels
kubectl get nodes myk8s-control-plane -o jsonpath={.metadata.labels} | jq
{
  "beta.kubernetes.io/arch": "amd64",
  "beta.kubernetes.io/os": "linux",
  "ingress-ready": "true",
...

# Deploy the NGINX ingress controller
## The manifest contains kind-specific patches to forward the hostPorts to the ingress controller,
## set taint tolerations, and schedule it onto the custom-labelled node.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
-------------------------
## Host ports 80 and 443 are used
ports:
- containerPort: 80
  hostPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  hostPort: 443
  name: https
  protocol: TCP
...

## nodeSelector pins the controller to the labelled node
nodeSelector:
  ingress-ready: "true"
  kubernetes.io/os: linux

## tolerations for the control-plane taints
tolerations:
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Equal
- effect: NoSchedule
  key: node-role.kubernetes.io/control-plane
  operator: Equal
-------------------------
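
# (from the upstream kind ingress guide) wait until the ingress-nginx controller pod is Ready before testing
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s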

# Check the ingress deployment
kubectl get deploy,svc,ep ingress-nginx-controller -n ingress-nginx

# On the control-plane node (actually a container), check the iptables rules that forward ports 80/443 to the ingress-nginx pod # 10.244.0.7 is the ingress-nginx pod IP
root@myk8s-control-plane:/# iptables -t nat -L -n -v | grep '10.244.0.7'
    0     0 DNAT       6    --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:10.244.0.7:80
    0     0 DNAT       6    --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 to:10.244.0.7:443
...
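
# The pod IP grepped above (10.244.0.7 in this run) can be looked up on the host with:
kubectl get pod -n ingress-nginx -o wide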

 

  • Using the Ingress
    • IP
# Deploy the pods, services, and ingress
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml

# Check
kubectl get ingress,pod -owide
kubectl get svc,ep foo-service bar-service
kubectl describe ingress example-ingress
Rules:
  Host        Path  Backends
  ----        ----  --------
  *           
              /foo(/|$)(.*)   foo-service:8080 (10.244.0.8:8080)
              /bar(/|$)(.*)   bar-service:8080 (10.244.0.9:8080)
Annotations:  nginx.ingress.kubernetes.io/rewrite-target: /$2

# Monitor the logs
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller -f

# Access tests
curl localhost/foo/hostname
curl localhost/bar/hostname

kubectl exec -it foo-app -- curl localhost:8080/foo/hostname
kubectl exec -it foo-app -- curl localhost:8080/hostname

kubectl exec -it bar-app -- curl localhost:8080/bar/hostname
kubectl exec -it bar-app -- curl localhost:8080/hostname

# Delete
kubectl delete -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
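
For reference, reconstructed from the describe output above (a sketch, not the upstream file verbatim), the Ingress in usage.yaml looks roughly like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /foo(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: foo-service
            port:
              number: 8080
      - path: /bar(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: bar-service
            port:
              number: 8080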

 

  • Feature Gates - Link + In-Place Pod Vertical Scaling - Link
    • Kubernetes feature gates can be enabled cluster-wide across all Kubernetes components with the following config:
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      featureGates:
        # any feature gate can be enabled here with "Name": true
        # or disabled here with "Name": false
        # not all feature gates are tested, however
        "InPlacePodVerticalScaling": true
       
    • In-Place Pod Vertical Scaling → hands-on: increasing pod resources (CPU/memory) in place
      #
      cat <<EOT> kind-FG.yaml
      kind: Cluster
      apiVersion: kind.x-k8s.io/v1alpha4
      nodes:
      - role: control-plane
      featureGates:
        "InPlacePodVerticalScaling": true
      EOT
      
      #
      kind create cluster --config kind-FG.yaml
      
      #
      kubectl describe pod -n kube-system kube-controller-manager-kind-control-plane | grep feature-gates
            --feature-gates=InPlacePodVerticalScaling=true
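      
      # (assumption: default cluster name "kind") the same flag should appear on the other control-plane components as well, e.g. the API server
      kubectl describe pod -n kube-system kube-apiserver-kind-control-plane | grep feature-gates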
      
      # metrics-server
      wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metric-server.yaml
      sed -i'' -r -e "/- --secure-port=10250/a\        - --kubelet-insecure-tls" metric-server.yaml
      kubectl apply -f metric-server.yaml
      kubectl get all -n kube-system -l k8s-app=metrics-server
      kubectl get apiservices |egrep '(AVAILABLE|metrics)'
      
      # Check
      kubectl top node
      kubectl top pod -A --sort-by='cpu'
      kubectl top pod -A --sort-by='memory'
      
      #
      cat <<EOF | kubectl create -f -
      apiVersion: v1
      kind: Pod
      metadata:
        name: stress-pod
      spec:
        containers:
        - name: stress
          image: alpine:latest
          command: ["sh", "-c", "apk add --no-cache stress-ng && stress-ng --cpu 1"]
          resources:
            limits:
              memory: "200Mi"
              cpu: "700m"
            requests:
              memory: "200Mi"
              cpu: "500m"
      EOF
      
      kubectl get pod stress-pod --output=yaml
      ...
      spec:
        containers:
          ...
          resizePolicy:
          - resourceName: cpu
            restartPolicy: NotRequired
          - resourceName: memory
            restartPolicy: NotRequired
          resources:
            limits:
              cpu: 700m
              memory: 200Mi
            requests:
              cpu: 500m
              memory: 200Mi
      ...
        containerStatuses:
      ...
          name: stress
          ready: true
      ...
          allocatedResources:
            cpu: 500m
            memory: 200Mi
          resources:
            limits:
              cpu: 700m
              memory: 200Mi
            requests:
              cpu: 500m
              memory: 200Mi
          restartCount: 0
          started: true
      ...
        qosClass: Burstable
      ...
      
      #
      kubectl get pod stress-pod --output=yaml
      kubectl top pod
      NAME         CPU(cores)   MEMORY(bytes)   
      stress-pod   701m         24Mi           
      
      # Now patch the Pod's container: keep the CPU request at 500m and raise the CPU limit to 1000m
      kubectl patch pod stress-pod --patch '{"spec":{"containers":[{"name":"stress", "resources":{"requests":{"cpu":"500m"}, "limits":{"cpu":"1000m"}}}]}}'
      
      #
      kubectl top pod
      NAME         CPU(cores)   MEMORY(bytes)   
      stress-pod   1000m        24Mi    
      
      kubectl get pod stress-pod --output=yaml
      ...
      spec:
        containers:
          ...
          resources:
            limits:
              cpu: "1"
              memory: 200Mi
            requests:
              cpu: 500m
              memory: 200Mi
      ...
        containerStatuses:
      ...
          allocatedResources:
            cpu: 500m
            memory: 200Mi
          resources:
            limits:
              cpu: "1"
              memory: 200Mi
            requests:
              cpu: 500m
              memory: 200Mi
          restartCount: 0
          started: true
      ...
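      
      # A memory resize can be done the same way; a hedged follow-up sketch (assumption: an in-place memory increase
      # behaves like the CPU patch above, since resizePolicy for memory is NotRequired and 200Mi -> 300Mi is an increase)
      kubectl patch pod stress-pod --patch '{"spec":{"containers":[{"name":"stress", "resources":{"requests":{"memory":"300Mi"}, "limits":{"memory":"300Mi"}}}]}}'
      
      # the new memory values should appear without a container restart
      kubectl get pod stress-pod --output=yaml
      kubectl top pod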
      
      
      # Clean up the exercise
      kubectl delete pod stress-pod
      kind delete cluster

      https://medium.com/@seifeddinerajhi/the-new-in-place-kubernetes-pod-resource-resizing-feature-a-deep-dive-11b0ece334ef