Istio Week 9 - Ambient Mesh
Istio Ambient Mesh provides a data plane that is integrated into the infrastructure instead of running as sidecar proxies. Classic Istio controls traffic by placing a sidecar proxy in every pod.

In Ambient Mesh, a ztunnel pod shared per node secures pod-to-pod communication. ztunnel provides L4 (transport-layer) security, authentication, telemetry, and L4 policy. When L7 (application-layer) traffic control or policy is needed, it is handled by a Waypoint Proxy pod.
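
As a quick check of the node-scoped model: ztunnel runs as a DaemonSet in istio-system, so each node gets exactly one shared ztunnel pod (both commands reappear in the lab below):

```bash
# ztunnel is a DaemonSet: one pod per node, shared by all ambient pods on that node
kubectl get ds -n istio-system ztunnel
kubectl get pod -n istio-system -l app=ztunnel -o wide
```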

A Waypoint Proxy can be deployed per namespace or per service account.
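
A sketch of the scoping options: recent Istio releases attach workloads to a waypoint with the istio.io/use-waypoint label at namespace, service, or pod granularity. The waypoint name `waypoint` and the `reviews` service here are illustrative, and the `--enroll-namespace` flag is as documented upstream:

```bash
# Namespace-wide: deploy a waypoint and enroll the whole namespace in one step
docker exec -it myk8s-control-plane istioctl waypoint apply -n default --enroll-namespace

# Finer-grained: route a single service (or pod) through the waypoint
kubectl label service reviews istio.io/use-waypoint=waypoint
```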

Traffic is tunneled with HTTP CONNECT using the HBONE (HTTP-Based Overlay Network Environment) protocol, over HTTP/2 with mTLS for encryption and mutual authentication.
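
A rough way to see HBONE on the wire once the lab below is running: all ambient pod-to-pod traffic converges on TCP port 15008 (the HBONE mTLS port), so a node-level capture there should show only encrypted HTTP/2 (tcpdump is installed on the kind nodes during setup):

```bash
# Watch HBONE tunnel traffic on a worker node; payloads are mTLS-encrypted
docker exec -it myk8s-worker tcpdump -i any tcp port 15008 -nn
```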

ztunnel receives its xDS config from Istiod and applies network configuration dynamically.
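
Unlike a sidecar, ztunnel consumes a simplified, workload-oriented xDS rather than Envoy's full CDS/LDS/EDS/RDS, which is why `istioctl proxy-status` later reports IGNORED in those columns for ztunnel rows. What it did receive can be listed directly:

```bash
# ztunnel rows show IGNORED for the Envoy-specific xDS types
docker exec -it myk8s-control-plane istioctl proxy-status
# Workloads and services pushed to ztunnel over xDS
docker exec -it myk8s-control-plane istioctl ztunnel-config workload
docker exec -it myk8s-control-plane istioctl ztunnel-config service
```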

ztunnel and the Waypoint Proxy have cleanly separated roles, which improves both security and scalability.

Ambient Mesh can apply mesh features without changing pod specs or restarting pods. It has lower resource overhead than the sidecar model, so cluster resources are used more efficiently. Security-wise, a ztunnel only has access to the keys of its own node's workloads, which limits the attack surface.
The sidecar model and Ambient Mesh can be used together in one mesh. Ambient Mesh brings simpler operations, lower cost, better scalability, and other benefits.
Istio's sidecar model must modify the application pod spec and redirect traffic, so installing or upgrading a sidecar requires restarting the pod. Proxy resources have to be provisioned per pod for the worst case, which drags down cluster-wide resource utilization. And because the sidecar captures traffic and processes HTTP, applications with non-standard HTTP implementations can have their traffic broken. These constraints created the need for a less intrusive, easier-to-use service mesh option.
Lab Environment Setup
```bash
#
kind create cluster --name myk8s --image kindest/node:v1.32.2 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000 # Sample Application
    hostPort: 30000
  - containerPort: 30001 # Prometheus
    hostPort: 30001
  - containerPort: 30002 # Grafana
    hostPort: 30002
  - containerPort: 30003 # Kiali
    hostPort: 30003
  - containerPort: 30004 # Tracing
    hostPort: 30004
  - containerPort: 30005 # kube-ops-view
    hostPort: 30005
- role: worker
- role: worker
networking:
  podSubnet: 10.10.0.0/16
  serviceSubnet: 10.200.1.0/24
EOF
# Verify installation
docker ps
# Install basic tools on each node
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'apt update && apt install tree psmisc lsof ipset wget bridge-utils net-tools dnsutils tcpdump ngrep iputils-ping git vim -y'; echo; done
# (Optional) install termshark on each node
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'DEBIAN_FRONTEND=noninteractive apt install termshark -y'; echo; done
# (Optional) kube-ops-view
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=NodePort,service.main.ports.http.nodePort=30005 --set env.TZ="Asia/Seoul" --namespace kube-system
kubectl get deploy,pod,svc,ep -n kube-system -l app.kubernetes.io/instance=kube-ops-view
## Check the kube-ops-view access URL
open "http://127.0.0.1:30005/#scale=1.5"
open "http://127.0.0.1:30005/#scale=1.3"
```
# Installing kind creates a docker bridge named kind : 172.18.0.0/16 range
docker network ls
docker inspect kind
# Start a 'test PC (mypc)' container : attach it to the kind docker bridge, with or without a fixed container IP
docker run -d --rm --name mypc --network kind --ip 172.18.0.100 nicolaka/netshoot sleep infinity # with a fixed IP
# If the fixed-IP run fails, run without specifying an IP as below
docker run -d --rm --name mypc --network kind nicolaka/netshoot sleep infinity # without a fixed IP
docker ps
# Start 'web server (myweb1, myweb2)' containers : attach them to the kind docker bridge
# https://hub.docker.com/r/hashicorp/http-echo
docker run -d --rm --name myweb1 --network kind --ip 172.18.0.101 hashicorp/http-echo -listen=:80 -text="myweb1 server"
docker run -d --rm --name myweb2 --network kind --ip 172.18.0.102 hashicorp/http-echo -listen=:80 -text="myweb2 server"
# If the fixed-IP run fails, run without specifying an IP as below
docker run -d --rm --name myweb1 --network kind hashicorp/http-echo -listen=:80 -text="myweb1 server"
docker run -d --rm --name myweb2 --network kind hashicorp/http-echo -listen=:80 -text="myweb2 server"
docker ps
# Check the container (node) IPs on the kind network
docker ps -q | xargs docker inspect --format '{{.Name}} {{.NetworkSettings.Networks.kind.IPAddress}}'
/myweb2 172.18.0.102
/myweb1 172.18.0.101
/mypc 172.18.0.100
/myk8s-control-plane 172.18.0.2
# Confirm container-name-based DNS works inside the same docker network (kind)!
docker exec -it mypc curl myweb1
docker exec -it mypc curl myweb2
docker exec -it mypc curl 172.18.0.101
docker exec -it mypc curl 172.18.0.102
# Deploy MetalLB
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.9/config/manifests/metallb-native.yaml
# Verify
kubectl get crd
kubectl get pod -n metallb-system
# Configure IPAddressPool and L2Advertisement
cat << EOF | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.201-172.18.255.220
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
  - default
EOF
# Verify
kubectl get IPAddressPool,L2Advertisement -A
# Enter myk8s-control-plane and proceed with the installation
docker exec -it myk8s-control-plane bash
-----------------------------------
# Install istioctl
export ISTIOV=1.26.0
echo 'export ISTIOV=1.26.0' >> /root/.bashrc
curl -s -L https://istio.io/downloadIstio | ISTIO_VERSION=$ISTIOV sh -
cp istio-$ISTIOV/bin/istioctl /usr/local/bin/istioctl
istioctl version --remote=false
client version: 1.26.0
# Deploy the control plane with the ambient profile
istioctl install --set profile=ambient --set meshConfig.accessLogFile=/dev/stdout --set meshConfig.enableTracing=true -y
# Install the Kubernetes Gateway API CRDs
kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || \
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.3.0/standard-install.yaml
# Install the observability addons
kubectl apply -f istio-$ISTIOV/samples/addons
kubectl apply -f istio-$ISTIOV/samples/addons # run once more if a nodePort conflict occurs
# Exit the node shell
exit
-----------------------------------
# Verify the installation : istiod, istio-ingressgateway, CRDs, etc.
kubectl get all,svc,ep,sa,cm,secret,pdb -n istio-system
kubectl get crd | grep istio.io
kubectl get crd | grep -v istio | grep -v metallb
kubectl get crd | grep gateways
gateways.gateway.networking.k8s.io 2025-06-01T04:54:23Z
gateways.networking.istio.io 2025-06-01T04:53:51Z
kubectl api-resources | grep Gateway
gatewayclasses gc gateway.networking.k8s.io/v1 false GatewayClass
gateways gtw gateway.networking.k8s.io/v1 true Gateway
gateways gw networking.istio.io/v1 true Gateway
kubectl describe cm -n istio-system istio
...
Data
====
mesh:
----
accessLogFile: /dev/stdout
defaultConfig:
discoveryAddress: istiod.istio-system.svc:15012
defaultProviders:
metrics:
- prometheus
enablePrometheusMerge: true
...
docker exec -it myk8s-control-plane istioctl proxy-status
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
ztunnel-25hpt.istio-system Kubernetes IGNORED IGNORED IGNORED IGNORED IGNORED istiod-86b6b7ff7-x4787 1.26.0
ztunnel-4r4d4.istio-system Kubernetes IGNORED IGNORED IGNORED IGNORED IGNORED istiod-86b6b7ff7-x4787 1.26.0
ztunnel-9rzzt.istio-system Kubernetes IGNORED IGNORED IGNORED IGNORED IGNORED istiod-86b6b7ff7-x4787 1.26.0
docker exec -it myk8s-control-plane istioctl ztunnel-config workload
docker exec -it myk8s-control-plane istioctl ztunnel-config service
# Check the iptables rules
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'iptables-save'; echo; done
# Change the addon services to NodePort 30001~30004 : prometheus(30001), grafana(30002), kiali(30003), tracing(30004)
kubectl patch svc -n istio-system prometheus -p '{"spec": {"type": "NodePort", "ports": [{"port": 9090, "targetPort": 9090, "nodePort": 30001}]}}'
kubectl patch svc -n istio-system grafana -p '{"spec": {"type": "NodePort", "ports": [{"port": 3000, "targetPort": 3000, "nodePort": 30002}]}}'
kubectl patch svc -n istio-system kiali -p '{"spec": {"type": "NodePort", "ports": [{"port": 20001, "targetPort": 20001, "nodePort": 30003}]}}'
kubectl patch svc -n istio-system tracing -p '{"spec": {"type": "NodePort", "ports": [{"port": 80, "targetPort": 16686, "nodePort": 30004}]}}'
# Access Prometheus : check envoy and istio metrics
open http://127.0.0.1:30001
# Access Grafana
open http://127.0.0.1:30002
# Access Kiali : NodePort
open http://127.0.0.1:30003
# Access tracing : Jaeger tracing dashboard
open http://127.0.0.1:30004
# Check the ztunnel pods : set pod name variables
kubectl get pod -n istio-system -l app=ztunnel -owide
kubectl get pod -n istio-system -l app=ztunnel
ZPOD1NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[0].metadata.name}")
ZPOD2NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[1].metadata.name}")
ZPOD3NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[2].metadata.name}")
echo $ZPOD1NAME $ZPOD2NAME $ZPOD3NAME
#
kubectl describe pod -n istio-system -l app=ztunnel
...
Containers:
istio-proxy:
Container ID: containerd://d81ca867bfd0c505f062ea181a070a8ab313df3591e599a22706a7f4f537ffc5
Image: docker.io/istio/ztunnel:1.26.0-distroless
Image ID: docker.io/istio/ztunnel@sha256:d711b5891822f4061c0849b886b4786f96b1728055333cbe42a99d0aeff36dbe
Port: 15020/TCP
...
Requests:
cpu: 200m
memory: 512Mi
Readiness: http-get http://:15021/healthz/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
CA_ADDRESS: istiod.istio-system.svc:15012
XDS_ADDRESS: istiod.istio-system.svc:15012
RUST_LOG: info
RUST_BACKTRACE: 1
ISTIO_META_CLUSTER_ID: Kubernetes
INPOD_ENABLED: true
TERMINATION_GRACE_PERIOD_SECONDS: 30
POD_NAME: ztunnel-9rzzt (v1:metadata.name)
POD_NAMESPACE: istio-system (v1:metadata.namespace)
NODE_NAME: (v1:spec.nodeName)
INSTANCE_IP: (v1:status.podIP)
SERVICE_ACCOUNT: (v1:spec.serviceAccountName)
ISTIO_META_ENABLE_HBONE: true
Mounts:
/tmp from tmp (rw)
/var/run/secrets/istio from istiod-ca-cert (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-42r88 (ro)
/var/run/secrets/tokens from istio-token (rw)
/var/run/ztunnel from cni-ztunnel-sock-dir (rw)
...
Volumes:
istio-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 43200
istiod-ca-cert:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-ca-root-cert
Optional: false
cni-ztunnel-sock-dir:
Type: HostPath (bare host directory volume)
Path: /var/run/ztunnel
HostPathType: DirectoryOrCreate
...
#
kubectl krew install pexec
kubectl pexec $ZPOD1NAME -it -T -n istio-system -- bash
-------------------------------------------------------
whoami
ip -c addr
ifconfig
iptables -t mangle -S
iptables -t nat -S
ss -tnlp
ss -tnp
ss -xnp
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
u_str ESTAB 0 0 * 44981 * 44982 users:(("ztunnel",pid=1,fd=13),("ztunnel",pid=1,fd=8),("ztunnel",pid=1,fd=6))
u_seq ESTAB 0 0 /var/run/ztunnel/ztunnel.sock 47646 * 46988
u_str ESTAB 0 0 * 44982 * 44981 users:(("ztunnel",pid=1,fd=7))
u_seq ESTAB 0 0 * 46988 * 47646 users:(("ztunnel",pid=1,fd=19))
ls -l /var/run/ztunnel
total 0
srwxr-xr-x 1 root root 0 Jun 1 04:54 ztunnel.sock
# Check metrics
curl -s http://localhost:15020/metrics
# Viewing Istiod state for ztunnel xDS resources
curl -s http://localhost:15000/config_dump
exit
-------------------------------------------------------
# Inspect the other ztunnel pods as well
kubectl pexec $ZPOD2NAME -it -T -n istio-system -- bash
kubectl pexec $ZPOD3NAME -it -T -n istio-system -- bash
# Check basic info on each node
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node sh -c 'ls -l /var/run/ztunnel'; echo; done
# Watch the ztunnel DaemonSet pod logs
kubectl logs -n istio-system -l app=ztunnel -f
...
#
docker exec -it myk8s-control-plane ls -l istio-1.26.0
total 40
-rw-r--r-- 1 root root 11357 May 7 11:05 LICENSE
-rw-r--r-- 1 root root 6927 May 7 11:05 README.md
drwxr-x--- 2 root root 4096 May 7 11:05 bin
-rw-r----- 1 root root 983 May 7 11:05 manifest.yaml
drwxr-xr-x 4 root root 4096 May 7 11:05 manifests
drwxr-xr-x 27 root root 4096 May 7 11:05 samples
drwxr-xr-x 3 root root 4096 May 7 11:05 tools
# Deploy the Bookinfo sample application:
docker exec -it myk8s-control-plane kubectl apply -f istio-1.26.0/samples/bookinfo/platform/kube/bookinfo.yaml
# Verify
kubectl get deploy,pod,svc,ep
docker exec -it myk8s-control-plane istioctl ztunnel-config service
docker exec -it myk8s-control-plane istioctl ztunnel-config workload
docker exec -it myk8s-control-plane istioctl proxy-status
# Check connectivity : the productpage page from the ratings pod
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
# Create a test pod for sending requests : netshoot
kubectl create sa netshoot
cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: netshoot
spec:
  serviceAccountName: netshoot
  nodeName: myk8s-control-plane
  containers:
  - name: netshoot
    image: nicolaka/netshoot
    command: ["tail"]
    args: ["-f", "/dev/null"]
  terminationGracePeriodSeconds: 0
EOF
# Verify the request
kubectl exec -it netshoot -- curl -sS productpage:9080/productpage | grep -i title
# Repeat requests
while true; do kubectl exec -it netshoot -- curl -sS productpage:9080/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done
#
docker exec -it myk8s-control-plane cat istio-1.26.0/samples/bookinfo/gateway-api/bookinfo-gateway.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  gatewayClassName: istio
  listeners:
  - name: http
    port: 80
    protocol: HTTP
    allowedRoutes:
      namespaces:
        from: Same
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: bookinfo
spec:
  parentRefs:
  - name: bookinfo-gateway
  rules:
  - matches:
    - path:
        type: Exact
        value: /productpage
    - path:
        type: PathPrefix
        value: /static
    - path:
        type: Exact
        value: /login
    - path:
        type: Exact
        value: /logout
    - path:
        type: PathPrefix
        value: /api/v1/products
    backendRefs:
    - name: productpage
      port: 9080
docker exec -it myk8s-control-plane kubectl apply -f istio-1.26.0/samples/bookinfo/gateway-api/bookinfo-gateway.yaml
# Verify
kubectl get gateway
NAME CLASS ADDRESS PROGRAMMED AGE
bookinfo-gateway istio 172.18.255.201 True 75s
kubectl get HTTPRoute
NAME HOSTNAMES AGE
bookinfo 101s
kubectl get svc,ep bookinfo-gateway-istio
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/bookinfo-gateway-istio LoadBalancer 10.200.1.122 172.18.255.201 15021:30870/TCP,80:31570/TCP 2m37s
NAME ENDPOINTS AGE
endpoints/bookinfo-gateway-istio 10.10.1.6:15021,10.10.1.6:80 2m37s
kubectl get pod -l gateway.istio.io/managed=istio.io-gateway-controller -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
bookinfo-gateway-istio-6cbd9bcd49-fwqqp 1/1 Running 0 3m45s 10.10.1.6 myk8s-worker2 <none> <none>
# Check access
docker ps
kubectl get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
GWLB=$(kubectl get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
docker exec -it mypc curl $GWLB/productpage -v
docker exec -it mypc curl $GWLB/productpage -I
# Repeat requests : keep this loop running in the mypc container!
GWLB=$(kubectl get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
while true; do docker exec -it mypc curl $GWLB/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done
# Try connecting from your local PC
kubectl patch svc bookinfo-gateway-istio -p '{"spec": {"type": "LoadBalancer", "ports": [{"port": 80, "targetPort": 80, "nodePort": 30000}]}}'
kubectl get svc bookinfo-gateway-istio
open "http://127.0.0.1:30000/productpage"
# Repeat requests
while true; do curl -s http://127.0.0.1:30000/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done
Adding your application to ambient - Docs
- The namespace or pod has the label istio.io/dataplane-mode=ambient
- The pod does not have the opt-out label istio.io/dataplane-mode=none
# Enable ambient mesh for every pod in the default namespace
# You can enable all pods in a given namespace to be part of the ambient mesh by simply labeling the namespace:
kubectl label namespace default istio.io/dataplane-mode=ambient
# Check the pods : no sidecars, and no impact on the pod lifecycle! -> mTLS encrypted transport and L4 telemetry (metrics) are provided
docker exec -it myk8s-control-plane istioctl proxy-status
kubectl get pod
#
docker exec -it myk8s-control-plane istioctl ztunnel-config workload
NAMESPACE POD NAME ADDRESS NODE WAYPOINT PROTOCOL
default details-v1-766844796b-nfsh8 10.10.2.16 myk8s-worker None HBONE
default netshoot 10.10.0.7 myk8s-control-plane None HBONE
default productpage-v1-54bb874995-xkq54 10.10.2.20 myk8s-worker None HBONE
...
docker exec -it myk8s-control-plane istioctl ztunnel-config workload --address 10.10.2.20
NAMESPACE POD NAME ADDRESS NODE WAYPOINT PROTOCOL
default productpage-v1-54bb874995-xkq54 10.10.2.20 myk8s-worker None HBONE
docker exec -it myk8s-control-plane istioctl ztunnel-config workload --address 10.10.2.20 -o json
[
{
"uid": "Kubernetes//Pod/default/productpage-v1-54bb874995-xkq54",
"workloadIps": [
"10.10.2.20"
],
"protocol": "HBONE",
"name": "productpage-v1-54bb874995-xkq54",
"namespace": "default",
"serviceAccount": "bookinfo-productpage",
"workloadName": "productpage-v1",
"workloadType": "pod",
"canonicalName": "productpage",
"canonicalRevision": "v1",
"clusterId": "Kubernetes",
"trustDomain": "cluster.local",
"locality": {},
"node": "myk8s-worker",
"status": "Healthy",
...
#
PPOD=$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')
kubectl pexec $PPOD -it -T -- bash
-------------------------------------------------------
iptables-save
iptables -t mangle -S
iptables -t nat -S
ss -tnlp
ss -tnp
ss -xnp
ls -l /var/run/ztunnel
# Check metrics
curl -s http://localhost:15020/metrics | grep '^[^#]'
...
# Viewing Istiod state for ztunnel xDS resources
curl -s http://localhost:15000/config_dump
exit
-------------------------------------------------------
# Check ipset on each node : pod IPs are managed as set members
for node in control-plane worker worker2; do echo "node : myk8s-$node" ; docker exec -it myk8s-$node ipset list; echo; done
...
Members:
10.10.2.15 comment "57009539-36bb-42e0-bdac-4c2356fabbd3"
10.10.2.19 comment "64f46320-b85d-4e10-ab97-718d4e282116"
10.10.2.17 comment "b5ff9b4c-722f-48ef-a789-a95485fe9fa8"
10.10.2.18 comment "c6b368f7-6b6f-4d1d-8dbe-4ee85d7c9c22"
10.10.2.16 comment "cd1b1016-5570-4492-ba3a-4299790029d9"
10.10.2.20 comment "2b97238b-3b37-4a13-87a2-70755cb225e6"
# Watch the istio-cni-node logs
kubectl -n istio-system logs -l k8s-app=istio-cni-node -f
...
# Monitor the ztunnel pod logs : IN/OUT traffic details
kubectl -n istio-system logs -l app=ztunnel -f | egrep "inbound|outbound"
2025-06-01T06:37:49.162266Z info access connection complete src.addr=10.10.2.20:48392 src.workload="productpage-v1-54bb874995-xkq54" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.18:15008 dst.hbone_addr=10.10.2.18:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-4r8n8" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="inbound" bytes_sent=602 bytes_recv=192 duration="31ms"
2025-06-01T06:37:49.162347Z info access connection complete src.addr=10.10.2.20:53412 src.workload="productpage-v1-54bb874995-xkq54" src.namespace="default" src.identity="spiffe://cluster.local/ns/default/sa/bookinfo-productpage" dst.addr=10.10.2.18:15008 dst.hbone_addr=10.10.2.18:9080 dst.service="reviews.default.svc.cluster.local" dst.workload="reviews-v2-556d6457d-4r8n8" dst.namespace="default" dst.identity="spiffe://cluster.local/ns/default/sa/bookinfo-reviews" direction="outbound" bytes_sent=192 bytes_recv=602 duration="31ms"
# Check the ztunnel pods : set pod name variables
kubectl get pod -n istio-system -l app=ztunnel -owide
kubectl get pod -n istio-system -l app=ztunnel
ZPOD1NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[0].metadata.name}")
ZPOD2NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[1].metadata.name}")
ZPOD3NAME=$(kubectl get pod -n istio-system -l app=ztunnel -o jsonpath="{.items[2].metadata.name}")
echo $ZPOD1NAME $ZPOD2NAME $ZPOD3NAME
#
kubectl pexec $ZPOD1NAME -it -T -n istio-system -- bash
-------------------------------------------------------
iptables -t mangle -S
iptables -t nat -S
ss -tnlp
ss -tnp
ss -xnp
ls -l /var/run/ztunnel
# Check metrics
curl -s http://localhost:15020/metrics | grep '^[^#]'
...
# Viewing Istiod state for ztunnel xDS resources
curl -s http://localhost:15000/config_dump
exit
-------------------------------------------------------
# Opt only the netshoot pod out of ambient mode
docker exec -it myk8s-control-plane istioctl ztunnel-config workload
kubectl label pod netshoot istio.io/dataplane-mode=none
docker exec -it myk8s-control-plane istioctl ztunnel-config workload
NAMESPACE POD NAME ADDRESS NODE WAYPOINT PROTOCOL
default netshoot 10.10.0.7 myk8s-control-plane None TCP
PS.
How would you migrate an existing sidecar deployment to ambient mode?
https://ambientmesh.io/docs/setup/sidecar-migration/
Migrating to ambient mesh from Istio in sidecar mode
The Solo ambient mesh migration tool provides a prescriptive path for migrating from Istio's sidecar mode to ambient mode. This migration is zero-downtime when used with the free Solo builds of Istio, but can also translate Kubernetes manifests for users
ztunnel-config
# A group of commands used to update or retrieve Ztunnel configuration from a Ztunnel instance.
docker exec -it myk8s-control-plane istioctl ztunnel-config
all Retrieves all configuration for the specified Ztunnel pod.
certificate Retrieves certificate for the specified Ztunnel pod.
connections Retrieves connections for the specified Ztunnel pod.
log Retrieves logging levels of the Ztunnel instance in the specified pod.
policy Retrieves policies for the specified Ztunnel pod.
service Retrieves services for the specified Ztunnel pod.
workload Retrieves workload configuration for the specified Ztunnel pod.
...
docker exec -it myk8s-control-plane istioctl ztunnel-config service
NAMESPACE SERVICE NAME SERVICE VIP WAYPOINT ENDPOINTS
default bookinfo-gateway-istio 10.200.1.122 None 1/1
default details 10.200.1.202 None 1/1
default kubernetes 10.200.1.1 None 1/1
default productpage 10.200.1.207 None 1/1
default ratings 10.200.1.129 None 1/1
default reviews 10.200.1.251 None 3/3
...
docker exec -it myk8s-control-plane istioctl ztunnel-config service --service-namespace default --node myk8s-worker
docker exec -it myk8s-control-plane istioctl ztunnel-config service --service-namespace default --node myk8s-worker2
docker exec -it myk8s-control-plane istioctl ztunnel-config service --service-namespace default --node myk8s-worker2 -o json
...
{
"name": "productpage",
"namespace": "default",
"hostname": "productpage.default.svc.cluster.local",
"vips": [
"/10.200.1.207"
],
"ports": {
"9080": 9080
},
"endpoints": {
"Kubernetes//Pod/default/productpage-v1-54bb874995-xkq54": {
"workloadUid": "Kubernetes//Pod/default/productpage-v1-54bb874995-xkq54",
"service": "",
"port": {
"9080": 9080
}
}
},
"ipFamilies": "IPv4"
},
...
docker exec -it myk8s-control-plane istioctl ztunnel-config workload
docker exec -it myk8s-control-plane istioctl ztunnel-config workload --workload-namespace default
docker exec -it myk8s-control-plane istioctl ztunnel-config workload --workload-namespace default --node myk8s-worker2
docker exec -it myk8s-control-plane istioctl ztunnel-config workload --workload-namespace default --node myk8s-worker -o json
...
{
"uid": "Kubernetes//Pod/default/productpage-v1-54bb874995-xkq54",
"workloadIps": [
"10.10.2.20"
],
"protocol": "HBONE",
"name": "productpage-v1-54bb874995-xkq54",
"namespace": "default",
"serviceAccount": "bookinfo-productpage",
"workloadName": "productpage-v1",
"workloadType": "pod",
"canonicalName": "productpage",
"canonicalRevision": "v1",
"clusterId": "Kubernetes",
"trustDomain": "cluster.local",
"locality": {},
"node": "myk8s-worker",
"status": "Healthy",
"hostname": "",
"capacity": 1,
"applicationTunnel": {
"protocol": ""
}
},
...
docker exec -it myk8s-control-plane istioctl ztunnel-config certificate --node myk8s-worker
CERTIFICATE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
spiffe://cluster.local/ns/default/sa/bookinfo-details Leaf Available true 6399f0ac1d1f508088c731791930a03a 2025-06-02T06:21:46Z 2025-06-01T06:19:46Z
spiffe://cluster.local/ns/default/sa/bookinfo-details Root Available true 8a432683a288fb4b61d5775dcf47019d 2035-05-30T04:53:58Z 2025-06-01T04:53:58Z
spiffe://cluster.local/ns/default/sa/bookinfo-productpage Leaf Available true f727d33914e23029b3c889c67efe12e1 2025-06-02T06:21:46Z 2025-06-01T06:19:46Z
spiffe://cluster.local/ns/default/sa/bookinfo-productpage Root Available true 8a432683a288fb4b61d5775dcf47019d 2035-05-30T04:53:58Z 2025-06-01T04:53:58Z
spiffe://cluster.local/ns/default/sa/bookinfo-ratings Leaf Available true d1af8176f5047f620d7795a4775869ad 2025-06-02T06:21:46Z 2025-06-01T06:19:46Z
spiffe://cluster.local/ns/default/sa/bookinfo-ratings Root Available true 8a432683a288fb4b61d5775dcf47019d 2035-05-30T04:53:58Z 2025-06-01T04:53:58Z
spiffe://cluster.local/ns/default/sa/bookinfo-reviews Leaf Available true af013ce3f7dca5dc1bead36455155d65 2025-06-02T06:21:46Z 2025-06-01T06:19:46Z
spiffe://cluster.local/ns/default/sa/bookinfo-reviews Root Available true 8a432683a288fb4b61d5775dcf47019d 2035-05-30T04:53:58Z 2025-06-01T04:53:58Z
docker exec -it myk8s-control-plane istioctl ztunnel-config certificate --node myk8s-worker -o json
...
docker exec -it myk8s-control-plane istioctl ztunnel-config connections --node myk8s-worker
WORKLOAD DIRECTION LOCAL REMOTE REMOTE TARGET PROTOCOL
productpage-v1-54bb874995-xkq54.default Inbound productpage-v1-54bb874995-xkq54.default:9080 bookinfo-gateway-istio-6cbd9bcd49-fwqqp.default:33052 HBONE
ratings-v1-5dc79b6bcd-64kcm.default Inbound ratings-v1-5dc79b6bcd-64kcm.default:9080 reviews-v2-556d6457d-4r8n8.default:56440 ratings.default.svc.cluster.local HBONE
reviews-v2-556d6457d-4r8n8.default Outbound reviews-v2-556d6457d-4r8n8.default:41722 ratings-v1-5dc79b6bcd-64kcm.default:15008 ratings.default.svc.cluster.local:9080 HBONE
docker exec -it myk8s-control-plane istioctl ztunnel-config connections --node myk8s-worker --raw
WORKLOAD DIRECTION LOCAL REMOTE REMOTE TARGET PROTOCOL
productpage-v1-54bb874995-xkq54.default Inbound 10.10.2.20:9080 10.10.1.6:33064 HBONE
productpage-v1-54bb874995-xkq54.default Inbound 10.10.2.20:9080 10.10.1.6:33052 HBONE
ratings-v1-5dc79b6bcd-64kcm.default Inbound 10.10.2.15:9080 10.10.2.18:56440 ratings.default.svc.cluster.local HBONE
ratings-v1-5dc79b6bcd-64kcm.default Inbound 10.10.2.15:9080 10.10.2.19:56306 ratings.default.svc.cluster.local HBONE
reviews-v2-556d6457d-4r8n8.default Outbound 10.10.2.18:45530 10.10.2.15:15008 10.200.1.129:9080 HBONE
reviews-v3-564544b4d6-nmf92.default Outbound 10.10.2.19:56168 10.10.2.15:15008 10.200.1.129:9080 HBONE
docker exec -it myk8s-control-plane istioctl ztunnel-config connections --node myk8s-worker -o json
...
{
"state": "Up",
"connections": {
"inbound": [
{
"src": "10.10.2.18:56440",
"originalDst": "ratings.default.svc.cluster.local",
"actualDst": "10.10.2.15:9080",
"protocol": "HBONE"
},
{
"src": "10.10.2.19:56306",
"originalDst": "ratings.default.svc.cluster.local",
"actualDst": "10.10.2.15:9080",
"protocol": "HBONE"
}
],
"outbound": []
},
"info": {
"name": "ratings-v1-5dc79b6bcd-64kcm",
"namespace": "default",
"trustDomain": "",
"serviceAccount": "bookinfo-ratings"
}
},
...
#
docker exec -it myk8s-control-plane istioctl ztunnel-config policy
NAMESPACE POLICY NAME ACTION SCOPE
#
docker exec -it myk8s-control-plane istioctl ztunnel-config log
ztunnel-25hpt.istio-system:
current log level is hickory_server::server::server_future=off,info
...
Secure Application Access : L4 Authorization Policy - Docs
In Istio's ambient mode, ztunnel enforces L4 security policy, and this can run alongside Kubernetes NetworkPolicy enforced by a compatible CNI plugin. The layered ztunnel/waypoint structure lets you opt workloads into L7 processing selectively, which also means policy can be enforced at multiple points along the path.
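
For example, a CNI-enforced NetworkPolicy can sit underneath the ambient L4 policy on the same workload. A minimal hypothetical sketch (kind's default kindnet CNI does not enforce NetworkPolicy, so this is illustrative only; note that ambient traffic reaches the pod on the HBONE port 15008, which must stay open):

```bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: productpage-l4
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: productpage
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}    # only pods in the default namespace
    ports:
    - protocol: TCP
      port: 9080         # the plaintext app port
    - protocol: TCP
      port: 15008        # HBONE: required for ambient traffic to reach the pod
EOF
```
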
# Re-enroll the netshoot pod in ambient mode
docker exec -it myk8s-control-plane istioctl ztunnel-config workload
kubectl label pod netshoot istio.io/dataplane-mode=ambient --overwrite
docker exec -it myk8s-control-plane istioctl ztunnel-config workload
# Create a new L4 Authorization Policy
# Explicitly allow the netshoot and gateway service accounts to call the productpage service:
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/netshoot
EOF
# Confirm the L4 Authorization Policy was created
kubectl get authorizationpolicy
NAME AGE
productpage-viewer 8s
# Monitor the ztunnel pod logs
kubectl logs ds/ztunnel -n istio-system -f | grep -E RBAC
# Verify the L4 Authorization Policy in action
## Confirm requests are denied!
GWLB=$(kubectl get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
while true; do docker exec -it mypc curl $GWLB/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done
## Confirm requests are allowed!
kubectl exec -it netshoot -- curl -sS productpage:9080/productpage | grep -i title
while true; do kubectl exec -it netshoot -- curl -sS productpage:9080/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done
# Update the L4 Authorization Policy
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: productpage-viewer
  namespace: default
spec:
  selector:
    matchLabels:
      app: productpage
  action: ALLOW
  rules:
  - from:
    - source:
        principals:
        - cluster.local/ns/default/sa/netshoot
        - cluster.local/ns/default/sa/bookinfo-gateway-istio
EOF
kubectl logs ds/ztunnel -n istio-system -f | grep -E RBAC
# Confirm requests are allowed!
GWLB=$(kubectl get svc bookinfo-gateway-istio -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
while true; do docker exec -it mypc curl $GWLB/productpage | grep -i title ; date "+%Y-%m-%d %H:%M:%S"; sleep 1; done

Configure waypoint proxies - Docs
Istio's ambient mode operates as a layered structure of **ztunnel (L4)** and the **waypoint proxy (L7)**: ztunnel handles node-level mTLS encryption and L4 traffic management, while the waypoint handles optional L7 features (HTTP routing, policy, metrics). A waypoint proxy is deployed per namespace and can be shared by many workloads, which is claimed to cut resource usage by 90% or more compared to sidecars. Waypoints can be enabled only when L7 features are needed, so adoption can be gradual: one is deployed only where HTTP-based traffic management, security policy, or observability is required. Policy enforcement moves to the destination-side waypoint, enabling centralized policy management, and traffic between ztunnel and the waypoint is carried securely over HBONE. When moving to ambient mode, mixed operation with existing sidecar deployments is possible, and CNI compatibility means Kubernetes NetworkPolicy can still be used in parallel.
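
Once a waypoint exists (created just below), L7 policy attaches to the waypoint with targetRefs instead of a pod selector. A hedged sketch along the lines of the upstream docs' example, allowing only GET requests through the default namespace's waypoint:

```bash
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1
kind: AuthorizationPolicy
metadata:
  name: waypoint-get-only
  namespace: default
spec:
  targetRefs:
  - kind: Gateway
    group: gateway.networking.k8s.io
    name: waypoint    # the waypoint created below
  action: ALLOW
  rules:
  - to:
    - operation:
        methods: ["GET"]
EOF
```
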
# istioctl can generate a Kubernetes Gateway resource for a waypoint proxy.
# For example, to generate a waypoint proxy named waypoint for the default namespace that can process traffic for services in the namespace:
kubectl describe pod bookinfo-gateway-istio-6cbd9bcd49-6cphf | grep 'Service Account'
Service Account: bookinfo-gateway-istio
# Generate a waypoint configuration as YAML
docker exec -it myk8s-control-plane istioctl waypoint generate -h
# --for string Specify the traffic type [all none service workload] for the waypoint
istioctl waypoint generate --for service -n default
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  labels:
    istio.io/waypoint-for: service
  name: waypoint
  namespace: default
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - name: mesh
    port: 15008
    protocol: HBONE
#
docker exec -it myk8s-control-plane istioctl waypoint apply -n default
✅ waypoint default/waypoint applied
kubectl get gateway
kubectl get gateway waypoint -o yaml
...
#
kubectl get pod -l service.istio.io/canonical-name=waypoint -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
waypoint-66b59898-p7v5x 1/1 Running 0 2m15s 10.10.1.21 myk8s-worker <none> <none>
#
docker exec -it myk8s-control-plane istioctl waypoint list
NAME REVISION PROGRAMMED
waypoint default True
docker exec -it myk8s-control-plane istioctl waypoint status
NAMESPACE NAME STATUS TYPE REASON MESSAGE
default waypoint True Programmed Programmed Resource programmed, assigned to service(s) waypoint.default.svc.cluster.local:15008
docker exec -it myk8s-control-plane istioctl proxy-status
NAME CLUSTER CDS LDS EDS RDS ECDS ISTIOD VERSION
bookinfo-gateway-istio-6cbd9bcd49-6cphf.default Kubernetes SYNCED (5m5s) SYNCED (5m5s) SYNCED (5m4s) SYNCED (5m5s) IGNORED istiod-86b6b7ff7-gmtdw 1.26.0
waypoint-66b59898-p7v5x.default Kubernetes SYNCED (5m4s) SYNCED (5m4s) IGNORED IGNORED IGNORED istiod-86b6b7ff7-gmtdw 1.26.0
ztunnel-52d22.istio-system Kubernetes IGNORED IGNORED IGNORED IGNORED IGNORED istiod-86b6b7ff7-gmtdw 1.26.0
ztunnel-ltckp.istio-system Kubernetes IGNORED IGNORED IGNORED IGNORED IGNORED istiod-86b6b7ff7-gmtdw 1.26.0
ztunnel-mg4mn.istio-system Kubernetes IGNORED IGNORED IGNORED IGNORED IGNORED istiod-86b6b7ff7-gmtdw 1.26.0
docker exec -it myk8s-control-plane istioctl proxy-config secret deploy/waypoint
RESOURCE NAME TYPE STATUS VALID CERT SERIAL NUMBER NOT AFTER NOT BEFORE
default Cert Chain ACTIVE true 4346296f183e559a876c0410cb154f50 2025-06-02T10:11:45Z 2025-06-01T10:09:45Z
ROOTCA CA ACTIVE true aa779aa04b241aedaa8579c0bc5c3b5f 2035-05-30T09:19:54Z 2025-06-01T09:19:54Z
#
kubectl pexec waypoint-66b59898-p7v5x -it -T -- bash
----------------------------------------------
ip -c a
curl -s http://localhost:15020/stats/prometheus
ss -tnlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
LISTEN 0 4096 127.0.0.1:15000 0.0.0.0:* users:(("envoy",pid=18,fd=18))
LISTEN 0 4096 0.0.0.0:15008 0.0.0.0:* users:(("envoy",pid=18,fd=35))
LISTEN 0 4096 0.0.0.0:15008 0.0.0.0:* users:(("envoy",pid=18,fd=34))
LISTEN 0 4096 0.0.0.0:15021 0.0.0.0:* users:(("envoy",pid=18,fd=23))
LISTEN 0 4096 0.0.0.0:15021 0.0.0.0:* users:(("envoy",pid=18,fd=22))
LISTEN 0 4096 0.0.0.0:15090 0.0.0.0:* users:(("envoy",pid=18,fd=21))
LISTEN 0 4096 0.0.0.0:15090 0.0.0.0:* users:(("envoy",pid=18,fd=20))
LISTEN 0 4096 *:15020 *:* users:(("pilot-agent",pid=1,fd=11))
ss -tnp
ss -xnlp
ss -xnp
exit
----------------------------------------------
And that wraps up the nine-week Istio study.
Thanks to Gasida and the organizers for all the hard work, and for running such a great study.