Service & AWS LoadBalancer Controller
Service
- Provides an access path and Service Discovery for an application (a set of Pods)
- A network object that exposes Pods to the network and load-balances connections to them
- Acts as a single Microservice unit
- Creates an FQDN of the form <service-name>.<namespace>.svc.cluster.local
- An abstract object with no workload of its own (just a "shell")
A Pod in Kubernetes can be restarted at any time, whether by its lifecycle or for some other reason.
On the compute side, a Deployment protects against this by keeping the defined number of Pods in the group running.
If a Deployment provides that guarantee for compute resources, then a Service provides it on the networking side.
As noted above, a Pod can restart at any time, and the IP assigned to a Pod can change whenever it is recreated.
If you pointed a service directly at a Pod's IP, the service would fail the moment that Pod restarted.
A Service instead forwards traffic to Pods according to the Endpoints in its configuration.
With this in place, even when a Pod terminates, the Service forwards traffic to a new Pod and service continuity is preserved.
Details such as graceful shutdown still need to be configured, but the overall flow is much the same.
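To see this directly, you can watch the Endpoints object while a backing Pod is replaced. A minimal sketch, assuming a Deployment labeled app=web exposed by a Service named web-svc (hypothetical names):
## Watch the Service's endpoints in one terminal
$ kubectl get endpoints web-svc -w
## Delete one backing Pod in another terminal
$ kubectl delete pod -l app=web --wait=false
## The endpoint list drops the old Pod IP and picks up the replacement Pod's IP
## once it passes readiness; the Service's ClusterIP and FQDN
## (web-svc.default.svc.cluster.local) never change.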
ClusterIP
- Assigns a virtual IP on an internal network, reachable only from inside the K8S cluster (see the sketch below)
- kube-proxy handles service-to-pod traffic
- Used when debugging or testing a service
- Commonly used for backend apps
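A minimal ClusterIP Service sketch (names are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: backend-svc
spec:
  type: ClusterIP        # the default type; can be omitted
  selector:
    app: backend
  ports:
    - port: 80           # port on the ClusterIP
      targetPort: 8080   # port on the container
## Reachable only from inside the cluster, e.g. from another pod:
$ kubectl exec -it <some-pod> -- curl http://backend-svc.default.svc.cluster.local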

NodePort
- Uses NAT to expose the service on a fixed port of each Node's IP
- The most basic way to deliver external traffic to a service
- Access from outside the cluster: <nodeIP>:<nodePort>
- Port range: 30000-32767
- The node port can be specified; if omitted, one is allocated at random from that range (see the sketch below)
- Clients have to append the port, which is inconvenient, so it is rarely used for general access; it is mainly used where code (e.g., an API) targets an exact, fixed endpoint
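A minimal NodePort sketch (illustrative names), pinning the node port explicitly:
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 80           # port on the ClusterIP
      targetPort: 8080   # port on the container
      nodePort: 30080    # must fall in 30000-32767; omit for a random port
## Access from outside the cluster:
$ curl http://<nodeIP>:30080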

LoadBalancer
- Exposes the service using NAT plus a cloud load balancer
- Automatically delivers external traffic to services inside the cluster
- Automatically creates an L4 (network) Load Balancer from the cloud vendor (AWS, GCP, Azure); a minimal sketch follows this list
- Access from outside the cluster: <LoadBalancer IP>:<port> or <DNS>:<port>
- Automatically allocates a NodePort and routes to the service through it
- The cloud vendor provides a DNS name for the Load Balancer, so domain-based access is possible
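On a cloud cluster, setting the type to LoadBalancer is enough for the cloud controller to provision the external L4 LB. A sketch (illustrative names):
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
## EXTERNAL-IP shows the LB address or DNS name; "80:3xxxx/TCP" shows the
## automatically allocated NodePort the LB routes through
$ kubectl get svc web-lb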

Service (LoadBalancer Controller)
- AWS Load Balancer Controller + NLB IP mode, working with the AWS VPC CNI
The AWS Load Balancer Controller is a controller that automatically manages load balancers on EKS.
Key features:
Automatically provisions an ALB for Ingress resources
Automatically provisions an NLB for Services of type LoadBalancer
Direct ALB/NLB target group management via the TargetGroupBinding CRD (sketched below)
[ Instance type: traffic is forwarded to the nodes' NodePort ]

[ IP type: requires the AWS LoadBalancer Controller pod and its policy ]
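Beyond the automatic provisioning above, the TargetGroupBinding CRD can attach an existing Service to a target group you manage yourself. A hedged sketch (the ARN and names are placeholders):
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-tgb
spec:
  serviceRef:
    name: svc-nlb-ip-type      # Service whose endpoints get registered
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:ap-northeast-2:<account-id>:targetgroup/<name>/<id>
  targetType: ip               # or "instance" to register nodes via NodePort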


Deploying the AWS LoadBalancer Controller
## Check CRDs before installation
$ k get crd
NAME CREATED AT
cninodes.vpcresources.k8s.aws 2025-02-15T04:18:00Z
eniconfigs.crd.k8s.amazonaws.com 2025-02-15T04:20:38Z
policyendpoints.networking.k8s.aws 2025-02-15T04:18:01Z
securitygrouppolicies.vpcresources.k8s.aws 2025-02-15T04:18:00Z
## Install the Helm chart
$ helm repo add eks https://aws.github.io/eks-charts
$ helm repo update
$ helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME
## Verify installation
$ k get crd
NAME CREATED AT
cninodes.vpcresources.k8s.aws 2025-02-15T04:18:00Z
eniconfigs.crd.k8s.amazonaws.com 2025-02-15T04:20:38Z
ingressclassparams.elbv2.k8s.aws 2025-02-15T18:19:42Z
policyendpoints.networking.k8s.aws 2025-02-15T04:18:01Z
securitygrouppolicies.vpcresources.k8s.aws 2025-02-15T04:18:00Z
targetgroupbindings.elbv2.k8s.aws 2025-02-15T18:19:42Z
## Inspect the controller
$ k describe deploy -n kube-system aws-load-balancer-controller
$ k describe deploy -n kube-system aws-load-balancer-controller | grep 'Service Account'
## Check the cluster role and binding
$ k describe clusterrolebindings.rbac.authorization.k8s.io aws-load-balancer-controller-rolebinding
$ k describe clusterroles.rbac.authorization.k8s.io aws-load-balancer-controller-role
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
targetgroupbindings.elbv2.k8s.aws [] [] [create delete get list patch update watch]
events [] [] [create patch]
configmaps [] [] [get delete create update]
ingresses [] [] [get list patch update watch]
services [] [] [get list patch update watch]
ingresses.extensions [] [] [get list patch update watch]
services.extensions [] [] [get list patch update watch]
ingresses.networking.k8s.io [] [] [get list patch update watch]
services.networking.k8s.io [] [] [get list patch update watch]
endpoints [] [] [get list watch]
namespaces [] [] [get list watch]
nodes [] [] [get list watch]
pods [] [] [get list watch]
endpointslices.discovery.k8s.io [] [] [get list watch]
ingressclassparams.elbv2.k8s.aws [] [] [get list watch]
ingressclasses.networking.k8s.io [] [] [get list watch]
ingresses/status [] [] [update patch]
pods/status [] [] [update patch]
services/status [] [] [update patch]
targetgroupbindings/status [] [] [update patch]
ingresses.elbv2.k8s.aws/status [] [] [update patch]
pods.elbv2.k8s.aws/status [] [] [update patch]
services.elbv2.k8s.aws/status [] [] [update patch]
targetgroupbindings.elbv2.k8s.aws/status [] [] [update patch]
ingresses.extensions/status [] [] [update patch]
pods.extensions/status [] [] [update patch]
services.extensions/status [] [] [update patch]
targetgroupbindings.extensions/status [] [] [update patch]
ingresses.networking.k8s.io/status [] [] [update patch]
pods.networking.k8s.io/status [] [] [update patch]
services.networking.k8s.io/status [] [] [update patch]
targetgroupbindings.networking.k8s.io/status [] [] [update patch]
Service/Pod deployment test with NLB
## Monitoring
$ k get svc,pod,ep,endpointslices
## Deploy the resources
cat << EOF > echo-service-nlb.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deploy-websrv
  template:
    metadata:
      labels:
        app: deploy-websrv
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: aews-websrv
        image: k8s.gcr.io/echoserver:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nlb-ip-type
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8080"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  selector:
    app: deploy-websrv
EOF
$ k apply -f echo-service-nlb.yaml
NAME READY STATUS RESTARTS AGE
pod/deploy-echo-bf9bdb8bc-g9vnh 1/1 Running 0 16s
pod/deploy-echo-bf9bdb8bc-xn4zt 1/1 Running 0 16s
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP                                                                          PORT(S)        AGE
service/kubernetes        ClusterIP      10.100.0.1      <none>                                                                               443/TCP        14h
service/svc-nlb-ip-type   LoadBalancer   10.100.55.220   k8s-default-svcnlbip-35426ad505-63cf188be6cffd3a.elb.ap-northeast-2.amazonaws.com   80:30811/TCP   16s
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.2.29:443,192.168.3.56:443 14h
endpoints/svc-nlb-ip-type 192.168.1.214:8080,192.168.2.150:8080 16s
Verify creation
$ aws elbv2 describe-load-balancers --query 'LoadBalancers[*].State.Code' --output text
$ k get targetgroupbindings
NAME SERVICE-NAME SERVICE-PORT TARGET-TYPE AGE
targetgroupbinding.elbv2.k8s.aws/k8s-default-svcnlbip-3316bbc1f0 svc-nlb-ip-type 80 ip 2m25s
$ k get targetgroupbindings -o json | jq
"finalizers": [
"elbv2.k8s.aws/resources"
],
"generation": 1,
"labels": {
"service.k8s.aws/stack-name": "svc-nlb-ip-type",
"service.k8s.aws/stack-namespace": "default"
},
"name": "k8s-default-svcnlbip-3316bbc1f0",
"namespace": "default",
"resourceVersion": "172045",
"uid": "66625816-88d1-4cf4-a16a-cac6c6aea2a1"
},
"spec": {
"ipAddressType": "ipv4",
"networking": {
"ingress": [
{
"from": [
{
"securityGroup": {
"groupID": "sg-0dd52ebfc60d7b3ac"
}
}
],
"ports": [
{
"port": 8080,
"protocol": "TCP"
}
]
}
]
},
"serviceRef": {
"name": "svc-nlb-ip-type",
"port": 80

## Check the web URL
$ kubectl get svc svc-nlb-ip-type -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "Pod Web URL = http://"$1 }'
Pod Web URL = http://k8s-default-svcnlbip-35426ad505-63cf188be6cffd3a.elb.ap-northeast-2.amazonaws.com

## Check Pod logs
$ k stern -l app=deploy-websrv
+ deploy-echo-bf9bdb8bc-g9vnh › aews-websrv
+ deploy-echo-bf9bdb8bc-xn4zt › aews-websrv
deploy-echo-bf9bdb8bc-xn4zt aews-websrv 192.168.3.30 - - [15/Feb/2025:18:30:58 +0000] "GET / HTTP/1.1" 200 1088 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36"
deploy-echo-bf9bdb8bc-xn4zt aews-websrv 192.168.3.30 - - [15/Feb/2025:18:30:58 +0000] "GET /favicon.ico HTTP/1.1" 200 1110 "http://k8s-default-svcnlbip-35426ad505-63cf188be6cffd3a.elb.ap-northeast-2.amazonaws.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36"
deploy-echo-bf9bdb8bc-xn4zt aews-websrv 192.168.3.30 - - [15/Feb/2025:18:32:03 +0000] "GET / HTTP/1.1" 200 1119 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36"
deploy-echo-bf9bdb8bc-xn4zt aews-websrv 192.168.3.30 - - [15/Feb/2025:18:32:03 +0000] "GET /favicon.ico HTTP/1.1" 200 1110 "http://k8s-default-svcnlbip-35426ad505-63cf188be6cffd3a.elb.ap-northeast-2.amazonaws.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36"
deploy-echo-bf9bdb8bc-xn4zt aews-websrv 192.168.3.30 - - [15/Feb/2025:18:32:03 +0000] "GET / HTTP/1.1" 200 1119 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36"
deploy-echo-bf9bdb8bc-xn4zt aews-websrv 192.168.3.30 - - [15/Feb/2025:18:32:03 +0000] "GET /favicon.ico HTTP/1.1" 200 1110 "http://k8s-default-svcnlbip-35426ad505-63cf188be6cffd3a.elb.ap-northeast-2.amazonaws.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36"
deploy-echo-bf9bdb8bc-g9vnh aews-websrv 192.168.2.85 - - [15/Feb/2025:18:32:03 +0000] "GET / HTTP/1.1" 200 1119 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36"
deploy-echo-bf9bdb8bc-g9vnh aews-websrv 192.168.2.85 - - [15/Feb/2025:18:32:04 +0000] "GET /favicon.ico HTTP/1.1" 200 1110 "http://k8s-default-svcnlbip-35426ad505-63cf188be6cffd3a.elb.ap-northeast-2.amazonaws.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36"
Verify load-balanced access
$ NLB=$(kubectl get svc svc-nlb-ip-type -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
$ curl -s $NLB
Hostname: deploy-echo-bf9bdb8bc-g9vnh
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.0 - lua: 10008
Request Information:
client_address=192.168.2.85
method=GET
real path=/
query=
request_version=1.1
request_uri=http://k8s-default-svcnlbip-35426ad505-63cf188be6cffd3a.elb.ap-northeast-2.amazonaws.com:8080/
Request Headers:
accept=*/*
host=k8s-default-svcnlbip-35426ad505-63cf188be6cffd3a.elb.ap-northeast-2.amazonaws.com
user-agent=curl/8.7.1
Request Body:
-no body in request-
## Send 100 requests and check the distribution
$ for i in {1..100}; do curl -s $NLB | grep Hostname ; done | sort | uniq -c | sort -nr
52 Hostname: deploy-echo-bf9bdb8bc-g9vnh
48 Hostname: deploy-echo-bf9bdb8bc-xn4zt
## Can also be checked with a loop
$ while true; do curl -s --connect-timeout 1 $NLB | egrep 'Hostname|client_address'; echo "----------" ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done
----------
2025-02-16 03:34:12
Hostname: deploy-echo-bf9bdb8bc-xn4zt
client_address=192.168.3.30
----------
2025-02-16 03:34:13
Hostname: deploy-echo-bf9bdb8bc-xn4zt
client_address=192.168.3.30
----------
2025-02-16 03:34:14
Hostname: deploy-echo-bf9bdb8bc-g9vnh
client_address=192.168.3.30
----------
2025-02-16 03:34:15
Hostname: deploy-echo-bf9bdb8bc-xn4zt
client_address=192.168.3.30
----------
2025-02-16 03:34:17
Hostname: deploy-echo-bf9bdb8bc-xn4zt
client_address=192.168.3.30
----------
2025-02-16 03:34:18
Hostname: deploy-echo-bf9bdb8bc-g9vnh
client_address=192.168.3.30
Ingress
Ingress exposes services inside the cluster to the outside world through a proxy.
Deploy ingress1.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 2
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
      - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
        imagePullPolicy: Always
        name: app-2048
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
        - path: /
          pathType: Prefix
          backend:
            service:
              name: service-2048
              port:
                number: 80
## Deploy the objects
$ k apply -f ingress1.yaml
## Monitor resource creation
$ watch -d kubectl get pod,ingress,svc,ep,endpointslices -n game-2048
NAME READY STATUS RESTARTS AGE
pod/deployment-2048-7df5f9886b-5smz4 1/1 Running 0 15s
pod/deployment-2048-7df5f9886b-mb8fx 1/1 Running 0 15s
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/ingress-2048 alb * k8s-game2048-ingress2-70d50ce3fd-450104156.ap-northeast-2.elb.amazonaws.com 80 16s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/service-2048 NodePort 10.100.58.47 <none> 80:30513/TCP 16s
NAME ENDPOINTS AGE
endpoints/service-2048 192.168.1.32:80,192.168.2.47:80 16s
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
endpointslice.discovery.k8s.io/service-2048-8xzh6 IPv4 80 192.168.1.32,192.168.2.47 16s
## Verify ALB creation
$ aws elbv2 describe-load-balancers --query 'LoadBalancers[?contains(LoadBalancerName, `k8s-game2048`) == `true`]' | jq
"DNSName": "k8s-game2048-ingress2-70d50ce3fd-450104156.ap-northeast-2.elb.amazonaws.com",
"CanonicalHostedZoneId": "ZWKZPGTI48KDX",
"CreatedTime": "2025-02-15T18:42:36.466000+00:00",
"LoadBalancerName": "k8s-game2048-ingress2-70d50ce3fd",
"Scheme": "internet-facing",
"VpcId": "vpc-0ece9237619f5bf51",
"State": {
"Code": "provisioning"
},
$ ALB_ARN=$(aws elbv2 describe-load-balancers --query 'LoadBalancers[?contains(LoadBalancerName, `k8s-game2048`) == `true`].LoadBalancerArn' | jq -r '.[0]')
$ aws elbv2 describe-target-groups --load-balancer-arn $ALB_ARN
$ TARGET_GROUP_ARN=$(aws elbv2 describe-target-groups --load-balancer-arn $ALB_ARN | jq -r '.TargetGroups[0].TargetGroupArn')
$ aws elbv2 describe-target-health --target-group-arn $TARGET_GROUP_ARN | jq
{
"TargetHealthDescriptions": [
{
"Target": {
"Id": "192.168.1.32",
"Port": 80,
"AvailabilityZone": "ap-northeast-2a"
},
"HealthCheckPort": "80",
"TargetHealth": {
"State": "initial",
"Reason": "Elb.RegistrationInProgress",
"Description": "Target registration is in progress"
}
},
{
"Target": {
"Id": "192.168.2.47",
"Port": 80,
"AvailabilityZone": "ap-northeast-2b"
},
"HealthCheckPort": "80",
"TargetHealth": {
"State": "initial",
"Reason": "Elb.RegistrationInProgress",
"Description": "Target registration is in progress"
}
}
]
}
Access the game
$ k get ingress -n game-2048 ingress-2048 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}' | awk '{ print "Game URL = http://"$1 }'
Game URL = http://k8s-game2048-ingress2-70d50ce3fd-450104156.ap-northeast-2.elb.amazonaws.com

Check the resource map in the AWS console
- The ALB forwards traffic directly to Pod IPs

$ k get pod -o wide -n game-2048
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deployment-2048-7df5f9886b-5smz4 1/1 Running 0 5m34s 192.168.2.47 ip-192-168-2-38.ap-northeast-2.compute.internal <none> <none>
deployment-2048-7df5f9886b-mb8fx 1/1 Running 0 5m34s 192.168.1.32 ip-192-168-1-172.ap-northeast-2.compute.internal <none> <none>
- Changing the Pod count
## Monitoring
# [Terminal 1]
$ watch kubectl get pod -n game-2048
# [Terminal 2]
$ while true; do aws elbv2 describe-target-health --target-group-arn $TARGET_GROUP_ARN --output text; echo; done
## Scale up the pods
$ k scale deploy -n game-2048 deployment-2048 --replicas 3
NAME READY STATUS RESTARTS AGE
deployment-2048-7df5f9886b-5smz4 1/1 Running 0 8m29s
deployment-2048-7df5f9886b-mb8fx 1/1 Running 0 8m29s
deployment-2048-7df5f9886b-mq9gm 1/1 Running 0 51s
TARGETHEALTHDESCRIPTIONS 80
TARGET ap-northeast-2a 192.168.1.32 80
TARGETHEALTH healthy
TARGETHEALTHDESCRIPTIONS 80
TARGET ap-northeast-2b 192.168.2.47 80
TARGETHEALTH healthy
TARGETHEALTHDESCRIPTIONS 80
TARGET ap-northeast-2c 192.168.3.71 80
TARGETHEALTH healthy
## Scale down the pods
$ k scale deploy -n game-2048 deployment-2048 --replicas 1
NAME READY STATUS RESTARTS AGE
deployment-2048-7df5f9886b-5smz4 1/1 Running 0 9m25s

ExternalDNS
When a domain is set on a Service or Ingress, ExternalDNS automatically creates and deletes the matching A records in DNS providers such as AWS Route 53, Azure DNS, and Google Cloud DNS.
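You can either annotate an existing Service, as done with the tetris example below, or bake the hostname into the manifest. A sketch of the manifest form, reusing the domain from this walkthrough:
apiVersion: v1
kind: Service
metadata:
  name: tetris
  annotations:
    external-dns.alpha.kubernetes.io/hostname: tetris.ssungz.net
spec:
  type: LoadBalancer
  selector:
    app: tetris
  ports:
    - port: 80
## ExternalDNS watches Services/Ingresses and upserts the matching record in Route 53.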
$ MyDomain=ssungz.net
## Look up the hosted zone ID and set a variable
$ aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." | jq
$ aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." --query "HostedZones[0].Name"
$ aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." --query "HostedZones[0].Id" --output text
$ MyDnzHostedZoneId=`aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." --query "HostedZones[0].Id" --output text`
$ echo $MyDnzHostedZoneId
## List the A records
$ aws route53 list-resource-record-sets --output json --hosted-zone-id "${MyDnzHostedZoneId}" --query "ResourceRecordSets[?Type == 'A']"
Install ExternalDNS
## Deploy ExternalDNS
$ curl -s -O https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml
$ cat externaldns.yaml
$ MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst < externaldns.yaml | kubectl apply -f -
## Verify installation and monitor
$ k get pod -l app.kubernetes.io/name=external-dns -n kube-system
NAME READY STATUS RESTARTS AGE
external-dns-7bbfd5c74d-g5klv 1/1 Running 0 17s
Service (NLB) + domain integration (ExternalDNS)
## Monitoring
$ k stern -l app.kubernetes.io/name=external-dns -n kube-system
## Deploy the Tetris game
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tetris
  labels:
    app: tetris
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tetris
  template:
    metadata:
      labels:
        app: tetris
    spec:
      containers:
      - name: tetris
        image: bsord/tetris
---
apiVersion: v1
kind: Service
metadata:
  name: tetris
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    #service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "80"
spec:
  selector:
    app: tetris
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
EOF
## Verify the deployment
$ k get deploy,svc,ep tetris
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/tetris 0/1 1 0 5s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/tetris LoadBalancer 10.100.210.179 k8s-default-tetris-3858665339-d4b48c95915a074c.elb.ap-northeast-2.amazonaws.com 80:31604/TCP 5s
NAME ENDPOINTS AGE
endpoints/tetris <none> 5s
## Attach an ExternalDNS domain to the NLB
$ k annotate service tetris "external-dns.alpha.kubernetes.io/hostname=tetris.$MyDomain"
$ k describe svc/tetris
Name: tetris
Namespace: default
Labels: <none>
Annotations: external-dns.alpha.kubernetes.io/hostname: tetris.ssungz.net
## Check the A record in Route 53
$ aws route53 list-resource-record-sets --hosted-zone-id "${MyDnzHostedZoneId}" --query "ResourceRecordSets[?Type == 'A']" | jq
"ResourceRecords": [
{
"Value": "175.106.96.101"
}
]
},
{
"Name": "tetris.ssungz.net.",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "ZIBE1TIR4HY56",
## Verify the domain with dig
$ dig +short tetris.$MyDomain
15.164.202.39
## Domain propagation checkers
$ echo -e "My Domain Checker Site1 = https://www.whatsmydns.net/#A/tetris.$MyDomain"
$ echo -e "My Domain Checker Site2 = https://dnschecker.org/#A/tetris.$MyDomain"
Verify site access

Topology Aware Routing
Routes network traffic along an optimal path within the cluster; created to reduce latency and cost.
https://kubernetes.io/docs/concepts/services-networking/topology-aware-routing/
_Topology Aware Routing_ provides a mechanism to help keep network traffic within the zone where it originated. Preferring same-zone traffic between Pods in your cluster can help with reliability, performance (network latency and throughput), or cost.
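The annotation-driven flow is demonstrated step by step below; as a quick sanity check once the demo Service exists, the zone hints can be grepped straight out of its EndpointSlice (a sketch using the Service from this walkthrough):
## Dump the zone hints; an empty result means the controller declined to
## allocate hints and traffic falls back to cluster-wide distribution
$ kubectl get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml | grep -B2 -A2 forZones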
## Check the AZ of each node
$ k get node --label-columns=topology.kubernetes.io/zone
NAME STATUS ROLES AGE VERSION ZONE
ip-192-168-1-172.ap-northeast-2.compute.internal Ready <none> 15h v1.31.5-eks-5d632ec ap-northeast-2a
ip-192-168-2-38.ap-northeast-2.compute.internal Ready <none> 15h v1.31.5-eks-5d632ec ap-northeast-2b
ip-192-168-3-98.ap-northeast-2.compute.internal Ready <none> 15h v1.31.5-eks-5d632ec ap-northeast-2c
## Deploy test resources
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo
spec:
  replicas: 3
  selector:
    matchLabels:
      app: deploy-websrv
  template:
    metadata:
      labels:
        app: deploy-websrv
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: websrv
        image: registry.k8s.io/echoserver:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-clusterip
spec:
  ports:
    - name: svc-webport
      port: 80
      targetPort: 8080
  selector:
    app: deploy-websrv
  type: ClusterIP
EOF
## Verify the deployed resources
$ k get deploy,svc,ep,endpointslices
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/deploy-echo 3/3 3 3 15s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 15h
service/svc-clusterip ClusterIP 10.100.141.22 <none> 80/TCP 15s
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.2.29:443,192.168.3.56:443 15h
endpoints/svc-clusterip 192.168.1.71:8080,192.168.2.150:8080,192.168.3.82:8080 15s
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
endpointslice.discovery.k8s.io/kubernetes IPv4 443 192.168.2.29,192.168.3.56 15h
endpointslice.discovery.k8s.io/svc-clusterip-c4tbk IPv4 8080 192.168.2.150,192.168.1.71,192.168.3.82 15s
## Query by label
$ k get endpointslices -l kubernetes.io/service-name=svc-clusterip
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
svc-clusterip-c4tbk IPv4 8080 192.168.2.150,192.168.1.71,192.168.3.82 46s
## Deploy a client pod for connection tests
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: netshoot-pod
spec:
  containers:
  - name: netshoot-pod
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF
## Check the resources
$ k get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-echo-75b7b9558c-kkr5n 1/1 Running 0 113s 192.168.3.82 ip-192-168-3-98.ap-northeast-2.compute.internal <none> <none>
deploy-echo-75b7b9558c-rgg8l 1/1 Running 0 113s 192.168.1.71 ip-192-168-1-172.ap-northeast-2.compute.internal <none> <none>
deploy-echo-75b7b9558c-z8qrb 1/1 Running 0 113s 192.168.2.150 ip-192-168-2-38.ap-northeast-2.compute.internal <none> <none>
netshoot-pod 1/1 Running 0 6s 192.168.2.103 ip-192-168-2-38.ap-northeast-2.compute.internal <none> <none>
- Verify load distribution when accessing the ClusterIP from the netshoot pod
## Check which nodes the deploy pods run on
$ k get pod -l app=deploy-websrv -o wide
## Check load distribution when accessing the ClusterIP from the test pod
$ k exec -it netshoot-pod -- curl svc-clusterip | grep Hostname

## Check the distribution over 100 requests
$ k exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"
38 Hostname: deploy-echo-75b7b9558c-rgg8l
36 Hostname: deploy-echo-75b7b9558c-kkr5n
26 Hostname: deploy-echo-75b7b9558c-z8qrb
## Inspect iptables
$ ssh ec2-user@$N1 sudo iptables -t nat -nvL
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-JXFW2Q4VP526YNPC all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.1.71:8080 */ statistic mode random probability 0.33333333349
0 0 KUBE-SEP-ZAJPLYNAVVFPWMJH all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.2.150:8080 */ statistic mode random probability 0.50000000000
0 0 KUBE-SEP-SVDPZWNEMP7LHDXJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.82:8080 */
The iptables rules above spread requests randomly across the three pods. The probabilities written in the rule comments look uneven, but they compose to a uniform split: the first rule matches 1/3 of traffic, the second matches 1/2 of the remaining 2/3 (another 1/3 overall), and the last rule catches the rest, so across ~100 requests the distribution came out fairly even, as observed.
Inspecting the chain for one target shows a masquerade rule whose source is that pod's IP, followed by the DNAT rule that rewrites the destination to the pod:
$ ssh ec2-user@$N1 sudo iptables -v --numeric --table nat --list KUBE-SEP-JXFW2Q4VP526YNPC
Chain KUBE-SEP-JXFW2Q4VP526YNPC (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-MARK-MASQ all -- * * 192.168.1.71 0.0.0.0/0 /* default/svc-clusterip:svc-webport */
0 0 DNAT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport */ tcp to:192.168.1.71:8080
## Pod IPs
$ k get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-echo-75b7b9558c-kkr5n 1/1 Running 0 17m 192.168.3.82 ip-192-168-3-98.ap-northeast-2.compute.internal <none> <none>
deploy-echo-75b7b9558c-rgg8l 1/1 Running 0 17m 192.168.1.71 ip-192-168-1-172.ap-northeast-2.compute.internal <none> <none>
deploy-echo-75b7b9558c-z8qrb 1/1 Running 0 17m 192.168.2.150 ip-192-168-2-38.ap-northeast-2.compute.internal <none> <none>
netshoot-pod 1/1 Running 0 15m 192.168.2.103 ip-192-168-2-38.ap-northeast-2.compute.internal <none> <none>
Topology Mode (Aware Hints)

## Without the annotation there is no hints block (it appears right after the conditions block)
$ k get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml
conditions:
  ready: true
  serving: true
  terminating: false
nodeName: ip-192-168-3-98.ap-northeast-2.compute.internal
## Add the annotation to enable topology-aware routing
$ k annotate service svc-clusterip "service.kubernetes.io/topology-mode=auto"
conditions:
  ready: true
  serving: true
  terminating: false
hints:
  forZones:
  - name: ap-northeast-2c
nodeName: ip-192-168-3-98.ap-northeast-2.compute.internal
## A forZones entry with the AZ is added under each endpoint
$ k get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml | grep -A1 Zone
forZones:
- name: ap-northeast-2b
--
forZones:
- name: ap-northeast-2a
--
forZones:
- name: ap-northeast-2c
## Try 100 requests
$ k exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"
100 Hostname: deploy-echo-75b7b9558c-z8qrb
Querying iptables now shows only the pod in the same AZ, whereas previously all three IPs were listed.
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-SVDPZWNEMP7LHDXJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.82:8080 */
After scaling down to a single Pod, I expected connections from a pod on a different node to fail, but in practice they still succeeded.
With only one Pod, the iptables rules are rewritten to target the one remaining Pod, so traffic flows even across AZs.
$ k get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
deploy-echo-75b7b9558c-kkr5n 1/1 Running 0 33m 192.168.3.82 ip-192-168-3-98.ap-northeast-2.compute.internal <none> <none>
netshoot-pod 1/1 Running 0 31m 192.168.2.103 ip-192-168-2-38.ap-northeast-2.compute.internal <none> <none>
$ k exec -it netshoot-pod -- zsh -c "for i in {1..100}; do curl -s svc-clusterip | grep Hostname; done | sort | uniq -c | sort -nr"
100 Hostname: deploy-echo-75b7b9558c-kkr5n
## Check iptables
# [Node 1]
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-SVDPZWNEMP7LHDXJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.82:8080 */
# [Node 2]
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
0 0 KUBE-SEP-SVDPZWNEMP7LHDXJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.82:8080 */
# [Node 3]
Chain KUBE-SVC-KBDEBIL6IU6WL7RF (1 references)
pkts bytes target prot opt in out source destination
100 6000 KUBE-SEP-SVDPZWNEMP7LHDXJ all -- * * 0.0.0.0/0 0.0.0.0/0 /* default/svc-clusterip:svc-webport -> 192.168.3.82:8080 */
Querying again also shows that the previously set hints have been removed.
$ k get endpointslices -l kubernetes.io/service-name=svc-clusterip -o yaml | grep -A1 Zone