Kubernetes

[ Kans 3 Study - 9w ] 4. Topology Aware Routing / Network Policies with VPC CNI

su''@ 2024. 11. 2. 22:42
This is a hands-on study post for CloudNetaStudy - Kubernetes Network, 3rd cohort.

 

[ Reference Links ]

10. Topology Aware Routing
  • Deploy a Deployment and a Service for testing
    # Check which AZ each node is currently deployed in
    kubectl get node --label-columns=topology.kubernetes.io/zone
    NAME                                               STATUS   ROLES    AGE   VERSION                ZONE
    ip-192-168-1-225.ap-northeast-2.compute.internal   Ready    <none>   70m   v1.24.11-eks-a59e1f0   ap-northeast-2a
    ip-192-168-2-248.ap-northeast-2.compute.internal   Ready    <none>   70m   v1.24.11-eks-a59e1f0   ap-northeast-2b
    ip-192-168-3-228.ap-northeast-2.compute.internal   Ready    <none>   70m   v1.24.11-eks-a59e1f0   ap-northeast-2c
    
    # Deploy a Deployment and a Service for testing
    cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: deploy-echo
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: deploy-websrv
      template:
        metadata:
          labels:
            app: deploy-websrv
        spec:
          terminationGracePeriodSeconds: 0
          containers:
          - name: websrv
            image: registry.k8s.io/echoserver:1.5
            ports:
            - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: svc-clusterip
    spec:
      ports:
        - name: svc-webport
          port: 80
          targetPort: 8080
      selector:
        app: deploy-websrv
      type: ClusterIP
    EOF
    
    # Verify
    kubectl get deploy,svc,ep,endpointslices
    kubectl get pod -owide
    kubectl get svc,ep svc-clusterip
    kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip
    kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml
    
    # Deploy a client pod to run connectivity tests
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: netshoot-pod
    spec:
      containers:
      - name: netshoot-pod
        image: nicolaka/netshoot
        command: ["tail"]
        args: ["-f", "/dev/null"]
      terminationGracePeriodSeconds: 0
    EOF
    
    # Verify
    kubectl get pod -owide



  • Check load balancing when accessing the ClusterIP from the test pod (netshoot-pod): traffic is distributed with random probability regardless of AZ (zone)
    # Check the AZ (zone) where the deployment pods are running
    kubectl get pod -l app=deploy-websrv -owide
    
    # Check load balancing when accessing the ClusterIP from the test pod (netshoot-pod)
    kubectl exec -it netshoot-pod -- curl svc-clusterip | grep Hostname
    Hostname: deploy-echo-7f67d598dc-h9vst
    
    kubectl exec -it netshoot-pod -- curl svc-clusterip | grep Hostname
    Hostname: deploy-echo-7f67d598dc-45trg
    
    # 100 repeated requests: random-probability load balancing across the 3 pods regardless of AZ (zone)
    kubectl exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"
      35 Hostname: deploy-echo-7f67d598dc-45trg
      33 Hostname: deploy-echo-7f67d598dc-hg995
      32 Hostname: deploy-echo-7f67d598dc-h9vst

    • (Advanced) Check the iptables rules: for the ClusterIP, KUBE-SVC-Y → KUBE-SEP-Z… (3 chains) ⇒ i.e., random-probability load balancing across the 3 pods
      #
      ssh ec2-user@$N1 sudo iptables -t nat -nvL
      ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list PREROUTING
      ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SERVICES
        305 18300 KUBE-SVC-KBDEBIL6IU6WL7RF  tcp  --  *      *       0.0.0.0/0            10.100.155.216       /* default/svc-clusterip:svc-webport cluster IP */ tcp dpt:80
        ...
      
      # Check the SVC chain on node 1: 3 SEP (endpoint) targets are listed >> i.e., random-probability load balancing across the 3 pods
      ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
      Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
       pkts bytes target     prot opt in     out     source               destination
        108  6480 KUBE-SEP-WC4ARU3RZJKCUD7M  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.1.240:8080 */ statistic mode random probability 0.33333333349
        115  6900 KUBE-SEP-3HFAJH523NG6SBCX  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.2.36:8080 */ statistic mode random probability 0.50000000000
         82  4920 KUBE-SEP-H37XIVQWZO52OMNP  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.3.13:8080 */
      
      # Check the same SVC chain on node 2: same as above
      ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
      (same as above)
      
      # Check the same SVC chain on node 3: same as above
      ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
      (same as above)
      
      # Each of the 3 SEP chains holds the DNAT target for one individual pod
      ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SEP-WC4ARU3RZJKCUD7M
      Chain KUBE-SEP-WC4ARU3RZJKCUD7M (1 references)
       pkts bytes target     prot opt in     out     source               destination
          0     0 KUBE-MARK-MASQ  all  --  *      *       192.168.1.240        0.0.0.0/0            /* default/svc-clusterip:svc-webport */
        108  6480 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport */ tcp to:192.168.1.240:8080
      
      ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SEP-3HFAJH523NG6SBCX
      Chain KUBE-SEP-3HFAJH523NG6SBCX (1 references)
       pkts bytes target     prot opt in     out     source               destination
          0     0 KUBE-MARK-MASQ  all  --  *      *       192.168.2.36         0.0.0.0/0            /* default/svc-clusterip:svc-webport */
        115  6900 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport */ tcp to:192.168.2.36:8080
      
      ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SEP-H37XIVQWZO52OMNP
      Chain KUBE-SEP-H37XIVQWZO52OMNP (1 references)
       pkts bytes target     prot opt in     out     source               destination
          0     0 KUBE-MARK-MASQ  all  --  *      *       192.168.3.13         0.0.0.0/0            /* default/svc-clusterip:svc-webport */
         82  4920 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport */ tcp to:192.168.3.13:8080
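
      The three KUBE-SEP rules form a probability cascade: the first matches 1/3 of packets, the second matches 1/2 of the remaining 2/3 (1/3 overall), and the last catches the rest, so each endpoint receives roughly one third of the traffic. A quick arithmetic check (illustrative only, not part of the lab):

      # cascading probabilities of the statistic-mode rules above
      awk 'BEGIN { p1=1.0/3; p2=(1-p1)*0.5; p3=1-p1-p2; printf "p1=%.3f p2=%.3f p3=%.3f\n", p1, p2, p3 }'
      # p1=0.333 p2=0.333 p3=0.333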

    • After enabling Topology Mode (formerly Topology Aware Hints), check access to the ClusterIP from the test pod (netshoot-pod): requests go only to destination pods in the same AZ (zone)
      • Hints indicate the zone an endpoint should serve traffic for; kube-proxy then routes traffic from a zone to its endpoints according to the applied hints.
        Source: https://docs.aws.amazon.com/eks/latest/best-practices/cost-opt-networking.html
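
        The annotation can also be set declaratively in the Service manifest instead of with the kubectl annotate command below; a minimal sketch (same Service as above, everything else unchanged):

        apiVersion: v1
        kind: Service
        metadata:
          name: svc-clusterip
          annotations:
            service.kubernetes.io/topology-mode: "auto"
        spec:
          ports:
            - name: svc-webport
              port: 80
              targetPort: 8080
          selector:
            app: deploy-websrv
          type: ClusterIP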

        # Enable Topology Aware Routing: add the annotation to the Service as shown below
        kubectl annotate service svc-clusterip "service.kubernetes.io/topology-mode=auto"
        
        # 100 repeated requests: only destination pods in the same AZ (zone) as the test pod (netshoot-pod) are reached
        kubectl exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"
          100 Hostname: deploy-echo-7f67d598dc-45trg
        
        # Checking the endpointslices shows hints that were not there before >> note that describe does not print the hints
        kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml
        apiVersion: v1
        items:
        - addressType: IPv4
          apiVersion: discovery.k8s.io/v1
          endpoints:
          - addresses:
            - 192.168.3.13
            conditions:
              ready: true
              serving: true
              terminating: false
            hints:
              forZones:
              - name: ap-northeast-2c
            nodeName: ip-192-168-3-228.ap-northeast-2.compute.internal
            targetRef:
              kind: Pod
              name: deploy-echo-7f67d598dc-hg995
              namespace: default
              uid: c1ce0e9c-14e7-417d-a1b9-2dfd54da8d4a
            zone: ap-northeast-2c
          - addresses:
            - 192.168.2.65
            conditions:
              ready: true
              serving: true
              terminating: false
            hints:
              forZones:
              - name: ap-northeast-2b
            nodeName: ip-192-168-2-248.ap-northeast-2.compute.internal
            targetRef:
              kind: Pod
              name: deploy-echo-7f67d598dc-h9vst
              namespace: default
              uid: 77af6a1b-c600-456c-96f3-e1af621be2af
            zone: ap-northeast-2b
          - addresses:
            - 192.168.1.240
            conditions:
              ready: true
              serving: true
              terminating: false
            hints:
              forZones:
              - name: ap-northeast-2a
            nodeName: ip-192-168-1-225.ap-northeast-2.compute.internal
            targetRef:
              kind: Pod
              name: deploy-echo-7f67d598dc-45trg
              namespace: default
              uid: 53ca3ac7-b9fb-4d98-a3f5-c312e60b1e67
            zone: ap-northeast-2a
          kind: EndpointSlice
        ...
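
        One way to print just the address-to-zone hint mapping from the EndpointSlice (a jsonpath sketch; field names as in the output above):

        kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip \
          -o jsonpath='{range .items[*].endpoints[*]}{.addresses[0]}{" -> "}{.hints.forZones[0].name}{"\n"}{end}'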
      • (Advanced) Check the iptables rules: for the ClusterIP, KUBE-SVC-Y → KUBE-SEP-Z… (only 1 chain, listing only the pod deployed in the same AZ as that node) ⇒ same-AZ traffic only
        ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SERVICES
        
        # Check the SVC chain on node 1: only 1 SEP (endpoint) target is listed (only the pod in the same AZ as the node) >> same-AZ traffic only
        ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
        Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
         pkts bytes target     prot opt in     out     source               destination
            0     0 KUBE-SEP-WC4ARU3RZJKCUD7M  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.1.240:8080 */
        
        # Check the SVC chain on node 2: only 1 SEP (endpoint) target is listed (only the pod in the same AZ as the node) >> same-AZ traffic only
        ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
        Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
         pkts bytes target     prot opt in     out     source               destination
            0     0 KUBE-SEP-3HFAJH523NG6SBCX  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.2.36:8080 */
        
        # Check the SVC chain on node 3: only 1 SEP (endpoint) target is listed (only the pod in the same AZ as the node) >> same-AZ traffic only
        ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
        Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
         pkts bytes target     prot opt in     out     source               destination
            0     0 KUBE-SEP-H37XIVQWZO52OMNP  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.3.13:8080 */

      • (Additional test) What if the pod count is reduced to 1 so there is no destination pod in the same AZ? When hints cannot be allocated across all zones, the controller removes them and traffic falls back to cluster-wide distribution.
        # Scale the deployment down to 1 pod
        kubectl scale deployment deploy-echo --replicas 1
        # If the remaining pod lands in the same AZ, scale to 0 and back to 1 to reschedule it
        kubectl scale deployment deploy-echo --replicas 0
        kubectl scale deployment deploy-echo --replicas 1
        
        # Check the pod's AZ: as shown below, it is currently deployed in a different AZ
        kubectl get pod -owide
        NAME                           READY   STATUS    RESTARTS   AGE   IP              NODE                                               NOMINATED NODE   READINESS GATES
        deploy-echo-7f67d598dc-h9vst   1/1     Running   0          18m   192.168.2.65    ip-192-168-2-248.ap-northeast-2.compute.internal   <none>           <none>
        netshoot-pod                   1/1     Running   0          66m   192.168.1.137   ip-192-168-1-225.ap-northeast-2.compute.internal   <none>           <none>
        
        # 100 repeated requests: the destination pod is reached even though it is in a different AZ!
        kubectl exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"
          100 Hostname: deploy-echo-7f67d598dc-h9vst
        
        
        ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SERVICES
        
        # All 3 nodes below have a single SEP rule for the SVC
        ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
        Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
         pkts bytes target     prot opt in     out     source               destination
          100  6000 KUBE-SEP-XFCOE5ZRIDUONHHN  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.2.65:8080 */
        
        ssh ec2-user@$N2 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
        Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
         pkts bytes target     prot opt in     out     source               destination
            0     0 KUBE-SEP-XFCOE5ZRIDUONHHN  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.2.65:8080 */
        
        ssh ec2-user@$N3 sudo iptables -v --numeric --table nat --list KUBE-SVC-KBDEBIL6IU6WL7RF
        Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
         pkts bytes target     prot opt in     out     source               destination
            0     0 KUBE-SEP-XFCOE5ZRIDUONHHN  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-clusterip:svc-webport -> 192.168.2.65:8080 */
        
        # Check the endpointslices: no hints present
        kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml
      • (Note) Remove the Topology Aware Hints setting
        kubectl annotate service svc-clusterip "service.kubernetes.io/topology-mode-"
      • Delete the lab resources: kubectl delete deploy deploy-echo; kubectl delete svc svc-clusterip
 

Restrict pod traffic with Kubernetes network policies - Amazon EKS


docs.aws.amazon.com

 

 

 

12. Network Policies with VPC CNI

 

  • How it works: packet filtering via eBPF - Network Policy Controller, Node Agent, eBPF SDK
    • Prerequisites: EKS 1.25 or later, AWS VPC CNI 1.14 or later, OS kernel 5.10 or later, EKS-optimized AMI (AL2, Bottlerocket, Ubuntu)
    • Network Policy Controller: installed automatically on EKS v1.25 and later; watches NetworkPolicy objects and instructs the Node Agent to create and update eBPF programs
    • Node Agent: bundled with the AWS VPC CNI and installed alongside the ipamd plugin (aws-node DaemonSet); manages the eBPF programs
    • eBPF SDK: the AWS VPC CNI includes an SDK for interacting with eBPF programs on the node, enabling runtime inspection, tracing, and analysis of eBPF execution
  • Preparation and basic information check
    # Network Policy is disabled by default and must be enabled: the lab environment already has this setting applied
    tail -n 11 myeks.yaml
    addons: 
    - name: vpc-cni # no version is specified so it deploys the default version
      version: latest # auto discovers the latest available
      attachPolicyARNs: 
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
      configurationValues: |-
        enableNetworkPolicy: "true"
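
    # (Note) On an existing cluster, the same setting can also be applied by updating the vpc-cni addon
    # configuration values; illustrative sketch only, verify the flags for your environment:
    # aws eks update-addon --cluster-name $CLUSTER_NAME --addon-name vpc-cni \
    #   --configuration-values '{"enableNetworkPolicy": "true"}'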
    
    # Check the Node Agent: confirm AWS VPC CNI version 1.14 or later
    kubectl get ds aws-node -n kube-system -o yaml | k neat
    ...
        - args: 
          - --enable-ipv6=false
          - --enable-network-policy=true
    ...
        volumeMounts: 
        - mountPath: /host/opt/cni/bin
          name: cni-bin-dir
        - mountPath: /sys/fs/bpf
          name: bpf-pin-path
        - mountPath: /var/log/aws-routed-eni
          name: log-dir
        - mountPath: /var/run/aws-node
          name: run-dir
    ...
    
    
    kubectl get ds aws-node -n kube-system -o yaml | grep -i image:
    kubectl get pod -n kube-system -l k8s-app=aws-node
    kubectl get ds -n kube-system aws-node -o jsonpath='{.spec.template.spec.containers[*].name}{"\n"}'
    aws-node aws-eks-nodeagent
    
    # Confirm EKS version 1.25 or later
    kubectl get node
    
    # Confirm OS kernel 5.10 or later
    ssh ec2-user@$N1 uname -r
    5.10.210-201.852.amzn2.x86_64
    
    # Check the eBPF programs currently loaded
    for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo /opt/cni/bin/aws-eks-na-cli ebpf progs; echo; done
    ...
    Programs currently loaded : 
    Type : 26 ID : 6 Associated maps count : 1
    ========================================================================================
    Type : 26 ID : 8 Associated maps count : 1
    ========================================================================================
    
    # Confirm the BPF filesystem is mounted on each node
    ssh ec2-user@$N1 mount | grep -i bpf
    none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
    
    ssh ec2-user@$N1 df -a | grep -i bpf
    none                   0       0         0    - /sys/fs/bpf
  • Deploy a sample application and apply network policies - Link
    Source: https://aws.amazon.com/ko/blogs/containers/amazon-vpc-cni-now-supports-kubernetes-network-policies/
    #
    git clone https://github.com/aws-samples/eks-network-policy-examples.git
    cd eks-network-policy-examples
    tree advanced/manifests/
    kubectl apply -f advanced/manifests/
    
    # Verify
    kubectl get pod,svc
    kubectl get pod,svc -n another-ns
    
    # Check connectivity
    kubectl exec -it client-one -- curl demo-app
    kubectl exec -it client-two -- curl demo-app
    kubectl exec -it another-client-one -n another-ns -- curl demo-app
    kubectl exec -it another-client-one -n another-ns -- curl demo-app.default
    kubectl exec -it another-client-two -n another-ns -- curl demo-app.default.svc
    • Deny all ingress traffic
      # Monitor
      # kubectl exec -it client-one -- curl demo-app
      while true; do kubectl exec -it client-one -- curl --connect-timeout 1 demo-app ; date; sleep 1; done
      
      # Apply the policy
      cat advanced/policies/01-deny-all-ingress.yaml
      kubectl apply -f advanced/policies/01-deny-all-ingress.yaml
      kubectl get networkpolicy
      
      # Check the eBPF programs currently loaded
      for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo /opt/cni/bin/aws-eks-na-cli ebpf progs; echo; done
      for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo /opt/cni/bin/aws-eks-na-cli ebpf loaded-ebpfdata; echo; done
      ...
      >> node 192.168.3.201 <<
      PinPath:  /sys/fs/bpf/globals/aws/programs/demo-app-6fd76f694b-default_handle_ingress
      Pod Identifier : demo-app-6fd76f694b-default  Direction : ingress 
      Prog ID:  9
      Associated Maps -> 
      Map Name:  ingress_map
      Map ID:  7
      Map Name:  policy_events
      Map ID:  6
      Map Name:  aws_conntrack_map
      Map ID:  5
      ========================================================================================
      PinPath:  /sys/fs/bpf/globals/aws/programs/demo-app-6fd76f694b-default_handle_egress
      Pod Identifier : demo-app-6fd76f694b-default  Direction : egress 
      Prog ID:  10
      Associated Maps -> 
      Map Name:  aws_conntrack_map
      Map ID:  5
      Map Name:  egress_map
      Map ID:  8
      Map Name:  policy_events
      Map ID:  6
      ========================================================================================
      
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 5
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 9
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 10
      
      # Delete the policy again
      kubectl delete -f advanced/policies/01-deny-all-ingress.yaml
      for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo /opt/cni/bin/aws-eks-na-cli ebpf loaded-ebpfdata; echo; done
      
      # Re-apply
      kubectl apply -f advanced/policies/01-deny-all-ingress.yaml
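
      For reference, a deny-all-ingress policy for the demo-app pods typically looks like the sketch below (labels are assumptions; the manifest in the repo is authoritative):

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: demo-app-deny-all-ingress
      spec:
        podSelector:
          matchLabels:
            app: demo-app
        policyTypes:
        - Ingress
      # no ingress rules are listed, so all ingress to the selected pods is denied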
    • Allow ingress from client-one in the same namespace
      #
      cat advanced/policies/03-allow-ingress-from-samens-client-one.yaml 
      kubectl apply -f advanced/policies/03-allow-ingress-from-samens-client-one.yaml
      kubectl get networkpolicy
      for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo /opt/cni/bin/aws-eks-na-cli ebpf loaded-ebpfdata; echo; done
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 5
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 9
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 10
      
      # Check access from client-two
      kubectl exec -it client-two -- curl --connect-timeout 1 demo-app
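
      Conceptually this policy adds a podSelector-based ingress rule on top of the deny-all baseline; a minimal sketch (label names are assumptions):

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-ingress-from-samens-client-one
      spec:
        podSelector:
          matchLabels:
            app: demo-app
        policyTypes:
        - Ingress
        ingress:
        - from:
          - podSelector:
              matchLabels:
                app: client-one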


    • Allow ingress from the another-ns namespace
      # Monitor
      # kubectl exec -it another-client-one -n another-ns -- curl --connect-timeout 1 demo-app.default
      while true; do kubectl exec -it another-client-one -n another-ns -- curl --connect-timeout 1 demo-app.default ; date; sleep 1; done
      
      #
      cat advanced/policies/04-allow-ingress-from-xns.yaml
      kubectl apply -f advanced/policies/04-allow-ingress-from-xns.yaml
      kubectl get networkpolicy
      for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo /opt/cni/bin/aws-eks-na-cli ebpf loaded-ebpfdata; echo; done
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 5
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 9
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 10
      
      #
      kubectl exec -it another-client-two -n another-ns -- curl --connect-timeout 1 demo-app.default
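
      Cross-namespace ingress is typically allowed with a namespaceSelector; a minimal sketch (selector labels are assumptions; kubernetes.io/metadata.name is set automatically on namespaces):

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-ingress-from-xns
      spec:
        podSelector:
          matchLabels:
            app: demo-app
        policyTypes:
        - Ingress
        ingress:
        - from:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: another-ns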
    • Check eBPF-related information
      # Check the eBPF programs currently loaded
      for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo /opt/cni/bin/aws-eks-na-cli ebpf progs; echo; done
      
      # Check the eBPF logs
      for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo cat /var/log/aws-routed-eni/ebpf-sdk.log; echo; done
      for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo cat /var/log/aws-routed-eni/network-policy-agent.log; echo; done
    • Deny egress traffic: apply full egress isolation to the client-one pod in the default namespace
      # Monitor
      while true; do kubectl exec -it client-one -- curl --connect-timeout 1 google.com ; date; sleep 1; done
      
      #
      cat advanced/policies/06-deny-egress-from-client-one.yaml
      kubectl apply -f advanced/policies/06-deny-egress-from-client-one.yaml
      kubectl get networkpolicy
      for i in $N1 $N2 $N3; do echo ">> node $i <<"; ssh ec2-user@$i sudo /opt/cni/bin/aws-eks-na-cli ebpf loaded-ebpfdata; echo; done
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 5
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 9
      ssh ec2-user@$N3 sudo /opt/cni/bin/aws-eks-na-cli ebpf dump-maps 10
      
      #
      kubectl exec -it client-one -- nslookup demo-app
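
      A typical egress-isolation policy selects the pod and declares Egress with no rules; a minimal sketch (labels are assumptions):

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: deny-egress-from-client-one
      spec:
        podSelector:
          matchLabels:
            app: client-one
        policyTypes:
        - Egress
      # no egress rules are listed, so all egress (including DNS) is denied,
      # which is why the nslookup in the last command fails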
    • Allow egress traffic: allow egress on several ports and namespaces, including DNS traffic
      # Monitor
      while true; do kubectl exec -it client-one -- curl --connect-timeout 1 demo-app ; date; sleep 1; done
      
      #
      cat advanced/policies/08-allow-egress-to-demo-app.yaml | yh
      kubectl apply -f advanced/policies/08-allow-egress-to-demo-app.yaml
      kubectl get networkpolicy
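
      To make egress work again, the policy has to allow DNS (53/UDP and 53/TCP to kube-system) as well as the application port; a minimal sketch (labels and ports are assumptions):

      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: allow-egress-to-demo-app
      spec:
        podSelector:
          matchLabels:
            app: client-one
        policyTypes:
        - Egress
        egress:
        - to:
          - namespaceSelector:
              matchLabels:
                kubernetes.io/metadata.name: kube-system
          ports:
          - protocol: UDP
            port: 53
          - protocol: TCP
            port: 53
        - to:
          - podSelector:
              matchLabels:
                app: demo-app
          ports:
          - protocol: TCP
            port: 80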
    • Delete the lab resources
      kubectl delete networkpolicy --all
      kubectl delete -f advanced/manifests/
(After completing the lab) Delete resources

 

Deletion: pros (complete removal with a single one-line command), cons (the SSH session must stay open until the deletion run finishes)

eksctl delete cluster --name $CLUSTER_NAME && aws cloudformation delete-stack --stack-name $CLUSTER_NAME