Purpose of this post: to translate and understand the official Git Common-Flow specification so that it can be used properly.

Reference URL: https://commonflow.org/spec/1.0.0-rc.5.html

 

 

Introduction

Common-Flow is an attempt to gather a sensible selection of the most common usage patterns of git into a single and concise specification. It is based on the original variant of GitHub Flow, while taking into account how a lot of open source projects most commonly use git.

In short, Common-Flow is essentially GitHub Flow with the addition of versioned releases, optional release branches, and without the requirement to deploy to production all the time.

 

In short, it is a branching strategy based on GitHub Flow that gathers the strengths of other strategies (such as Git Flow) so that the most common git usage patterns can be handled effectively.

 

Summary

  • The "master" branch is the mainline branch with latest changes, and must not be broken.
  • Changes (features, bugfixes, etc.) are done on "change branches" created from the master branch.
  • Rebase change branches early and often.
  • When a change branch is stable and ready, it is merged back in to master.
  • A release is just a git tag whose name is the exact release version string (e.g. "2.11.4").
  • Release branches can be used to avoid change freezes on master. They are not required, instead they are available if you need them.

"메인 브랜치"는 최신 변경 사항이 있는 메인라인 브랜치이며 중단되면 안됨.

Development work is done on "change branches" created from the master branch.

 

 

 

 

Terminology

  • Master Branch - Must be named "master", must always have passing tests, and is not guaranteed to always work in production environments.
  • Change Branches - Any branch that introduces changes like a new feature, a bug fix, etc.
  • Source Branch - The branch that a change branch was created from. New changes in the source branch should be incorporated into the change branch via rebasing.
  • Merge Target - A branch that is the intended merge target for a change branch. Typically the merge target branch will be the same as the source branch.
  • Pull Request - A means of requesting that a change branch is merged in to its merge target, allowing others to review, discuss and approve the changes.
  • Release - May be considered safe to use in production environments. Is effectively just a git tag named after the version of the release.
  • Release Branches - Used both for short-term preparations of a release, and also for long-term maintenance of older versions.

 

 

Git Common-Flow Specification (Common-Flow)

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

  1. TL;DR
    1. Do not break the master branch.
    2. A release is a git tag.

 

  2. The Master Branch
    1. A branch named "master" MUST exist and it MUST be referred to as the "master branch".
    2. The master branch MUST always be in a non-broken state with its test suite passing.
    3. The master branch IS NOT guaranteed to always work in production environments. Despite test suites passing it may at times contain unfinished work. Only releases may be considered safe for production use.
    4. The master branch SHOULD always be in an "as near as possible ready for release/production" state to reduce any friction with creating a new release.

 

  3. Change Branches
    1. Each change (feature, bugfix, etc.) MUST be performed on separate branches that SHOULD be referred to as "change branches".
    2. All change branches MUST have descriptive names.
    3. It is RECOMMENDED that you commit often locally, and that you try and keep the commits reasonably structured to avoid a messy and confusing git history.
    4. You SHOULD regularly push your work to the same named branch on the remote server.
    5. You SHOULD create separate change branches for each distinctly different change. You SHOULD NOT include multiple unrelated changes into a single change branch.
    6. When a change branch is created, the branch that it is created from SHOULD be referred to as the "source branch". Each change branch also needs a designated "merge target" branch, typically this will be the same as the source branch.
    7. Change branches MUST be regularly updated with any changes from their source branch. This MUST be done by rebasing the change branch on top of the source branch.
    8. After updating a change branch from its source branch you MUST push the change branch to the remote server. Due to the nature of rebasing, you will be required to do a force push, and you MUST use the "--force-with-lease" git push option when doing so instead of the regular "--force". (See the sketch after this list.)
    9. If there is a truly valid technical reason to not use rebase when updating change branches, then you can update change branches via merge instead of rebase. The decision to use merge MUST only be taken after all possible options to use rebase have been tried and failed. People not understanding how to use rebase is NOT a valid reason to use merge. If you do decide to use merge instead of rebase, you MUST NOT use a mixture of both methods, pick one and stick to it.
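
A minimal sketch of the update cycle from items 7 and 8, assuming a change branch named "add-2fa-support" whose source branch is master:

# bring in the latest changes from the source branch by rebasing on top of it
git checkout add-2fa-support
git fetch origin
git rebase origin/master

# push the rebased branch; --force-with-lease refuses to overwrite
# remote work that you have not yet seen locally
git push --force-with-lease origin add-2fa-support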

 

  4. Pull Requests
    1. To merge a change branch into its merge target, you MUST open a "pull request" (or equivalent).
    2. The purpose of a pull request is to allow others to review your changes and give feedback. You can then fix any issues, complaints, and more that might arise, and then let people review again.
    3. Before creating a pull request, it is RECOMMENDED that you consider the state of your change branch's commit history. If it is messy and confusing, it might be a good idea to rebase your branch with "git rebase -i" to present a cleaner and easier to follow commit history for your reviewers.
    4. A pull request MUST only be merged when the change branch is up-to-date with its source branch, the test suite is passing, and you and others are happy with the change. This is especially important if the merge target is the master branch.
    5. To get feedback, help, or generally just discuss a change branch with others, it is RECOMMENDED you create a pull request and discuss the changes with others there. This leaves a clear and visible history of how, when, and why the code looks and behaves the way it does.

 

  5. Versioning
    1. A "version string" is a typically mostly numeric string that identifies a specific version of a project. The version string itself MUST NOT have a "v" prefix, but the version string can be displayed with a "v" prefix to indicate it is a version that is being referred to.
    2. The source of truth for a project's version MUST be a git tag with a name based on the version string. This kind of tag MUST be referred to as a "release tag".
    3. It is OPTIONAL, but RECOMMENDED to also keep the version string hard-coded somewhere in the project code-base.
    4. If you hard-code the version string into the code-base, it is RECOMMENDED that you do so in a file called "VERSION" located in the root of the project. But be mindful of the conventions of your programming language and community when choosing if, where and how to hard-code the version string.
    5. If you are using a "VERSION" file in the root of the project, this file MUST only contain the exact version string, meaning it MUST NOT have a "v" prefix. For example "v2.11.4" is bad, and "2.11.4" is good.
    6. It is OPTIONAL, but RECOMMENDED that the version string follows Semantic Versioning (http://semver.org/).

 

  6. Releases
    1. To create a new release, you MUST create a git tag named as the exact version string of the release. This kind of tag MUST be referred to as a "release tag".
    2. The release tag name can OPTIONALLY be prefixed with "v". For example the tag name can be either "2.11.4" or "v2.11.4". It is however RECOMMENDED that you do not use a "v" prefix. You MUST NOT use a mixture of "v" prefixed and non-prefixed tags. Pick one form and stick to it.
    3. If the version string is hard-coded into the code-base, you MUST create a "version bump" commit which changes the hard-coded version string of the project.
    4. When using version bump commits, the release tag MUST be placed on the version bump commit.
    5. If you are not using a release branch, then the release tag, and if relevant the version bump commit, MUST be created directly on the master branch.
    6. The version bump commit SHOULD have a commit message title of "Bump version to VERSION". For example, if the new version string is "2.11.4", the first line of the commit message SHOULD read: "Bump version to 2.11.4"
    7. It is RECOMMENDED that release tags are lightweight tags, but you can OPTIONALLY use annotated tags if you want to include changelog information in the release tag itself.
    8. If you use annotated release tags, the first line of the annotation SHOULD read "Release VERSION". For example for version "2.11.4" the first line of the tag annotation SHOULD read "Release 2.11.4". The second line MUST be blank, and the changelog MUST start on the third line.
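
A sketch of the release flow above, using the hypothetical version "2.11.4" with a hard-coded VERSION file and a lightweight release tag:

# version bump commit, with the release tag placed directly on it
echo "2.11.4" > VERSION
git add VERSION
git commit -m "Bump version to 2.11.4"
git tag 2.11.4

# or, to carry a changelog in the tag itself, use an annotated tag whose
# annotation starts with "Release 2.11.4", then a blank line, then the changelog
# git tag -a 2.11.4

git push origin master 2.11.4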

 

  7. Short-Term Release Branches
    1. Any branch that has a name starting with "release-" SHOULD be referred to as a "release branch".
    2. Any release branch which has a name ending with a specific version string, MUST be referred to as a "short-term release branch".
    3. Use of short-term release branches is OPTIONAL, and intended to be used to create a specific versioned release.
    4. A short-term release branch is RECOMMENDED if there is a lengthy pre-release verification process to avoid a code freeze on the master branch.
    5. Short-term release branches MUST have a name of "release-VERSION". For example for version "2.11.4" the release branch name MUST be "release-2.11.4".
    6. When using a short-term release branch to create a release, the release tag and if used, version bump commit, MUST be placed directly on the short-term release branch itself.
    7. Only very minor changes should be performed on a short-term release branch directly. Any larger changes SHOULD be done in the master branch, and SHOULD be pulled into the release branch by rebasing it on top of the master branch the same way a change branch pulls in updates from its source branch.
    8. After a release tag has been created, the release branch MUST be merged back into its source branch and then deleted. Typically the source branch will be the master branch.
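
A sketch of the short-term release branch flow above (version string hypothetical):

# branch off master so master does not need a change freeze
git checkout -b release-2.11.4 master

# pre-release verification and the version bump commit happen here,
# and the release tag is placed directly on this branch
git add VERSION
git commit -m "Bump version to 2.11.4"
git tag 2.11.4

# after tagging, merge the release branch back and delete it
git checkout master
git merge --no-ff release-2.11.4
git branch -d release-2.11.4
git push origin master 2.11.4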

 

  8. Long-term Release Branches
    1. Any release branch which has a name ending with a non-specific version string, MUST be referred to as a "long-term release branch". For example "release-2.11" is a long-term release branch, while "release-2.11.4" is a short-term release branch.
    2. Use of long-term release branches is OPTIONAL, and intended for work on versions which are not currently part of the master branch. Typically this is useful when you need to create a new maintenance release for an older version.
    3. A long-term release branch MUST have a name with a non-specific version number. For example a long-term release branch for creating new 2.9.x releases MUST be named "release-2.9".
    4. Long-term release branches for maintenance releases of older versions MUST be created from the relevant release tag. For example if the master branch is on version 2.11.4 and there is a security fix for all 2.9.x releases, the latest of which is "2.9.7". Create a new branch called "release-2.9" from the "2.9.7" release tag. The security fix release will then end up being version "2.9.8".
    5. To create a new release from a long-term release branch, you MUST follow the same process as a release from the master branch, except the long-term release branch takes the place of the master branch.
    6. A long-term release branch should be treated with the same respect as the master branch. It is effectively the master branch for the release series in question. Meaning it MUST always be in a non-broken state, MUST NOT be force pushed to, etc.
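
Continuing the example from item 4, a sketch of cutting a 2.9.8 maintenance release (version strings hypothetical):

# long-term release branch created from the latest 2.9.x release tag
git checkout -b release-2.9 2.9.7
git push -u origin release-2.9

# the fix lands via a change branch whose merge target is release-2.9;
# the release itself is then cut exactly as it would be from master
git add VERSION
git commit -m "Bump version to 2.9.8"
git tag 2.9.8
git push origin release-2.9 2.9.8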

 

  1. Bug Fixes & Rollback
    1. You MUST NOT under any circumstances force push to the master branch or to long-term release branches.
    2. If a change branch which has been merged into the master branch is found to have a bug in it, the bug fix work MUST be done as a new separate change branch and MUST follow the same workflow as any other change branch.
    3. If a change branch is wrongfully merged into master, or for any other reason the merge must be undone, you MUST undo the merge by reverting the merge commit itself. Effectively creating a new commit that reverses all the relevant changes.
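
For example, a sketch of undoing a wrongfully merged change branch (the merge commit hash is a placeholder):

# -m 1 selects the mainline (master) parent, so the new commit
# reverses everything the merged branch introduced
git revert -m 1 <merge-commit-sha>
git push origin master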

 

  10. Git Best Practices
    1. All commit messages SHOULD follow the Commit Guidelines and format from the official git documentation: https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project#_commit_guidelines
    2. You SHOULD never blindly commit all changes with "git commit -a". It is RECOMMENDED you use "git add -i" or "git add -p" to add individual changes to the staging area so you are fully aware of what you are committing.
    3. You SHOULD always use "--force-with-lease" when doing a force push. The regular "--force" option is dangerous and destructive. More information: https://developer.atlassian.com/blog/2015/04/force-with-lease/
    4. You SHOULD understand and be comfortable with rebasing: https://git-scm.com/book/en/v2/Git-Branching-Rebasing
    5. It is RECOMMENDED that you always do "git pull --rebase" instead of "git pull" to avoid unnecessary merge commits. You can make this the default behavior of "git pull" with "git config --global pull.rebase true".
    6. It is RECOMMENDED that all branches be merged using "git merge --no-ff". This makes sure the reference to the original branch is kept in the commits, allows one to revert a merge by reverting a single merge commit, and creates a merge commit to mark the integration of the branch with master.

 

FAQ

Why use Common-Flow instead of Git Flow, and how does it differ?

Common-Flow tries to be a lot less complicated than Git Flow by having fewer types of branches, and simpler rules. Normal day to day development doesn't really change much:

  • You create change branches instead of feature branches, without the need of a "feature/" or "change/" prefix in the branch name.
  • Change branches are typically created from and merged back into "master" instead of "develop".
  • Creating a release is done by simply creating a git tag, typically on the master branch.

 

 

In detail, the main differences between Git Flow and Common-Flow are:

  • There is no "develop" branch, there is only a "master" branch which contains the latest work. In Git Flow the master branch effectively ends up just being a pointer to the latest release, despite the fact that Git Flow includes release tags too. In Common-Flow you just look at the tags to find the latest release.
  • There are no "feature" or "hotfix" branches, there's only "change" branches. Any branch that is not master and introduces changes is a change branch. Change branches also don't have a enforced naming convention, they just have to have a "descriptive name". This makes things simpler and allows more flexibility.
  • Release branches are available, but optional. Instead of enforcing the use of release branches like Git Flow, Common-Flow only recommends the use of release branches when it makes things easier. If creating a new release by tagging "master" works for you, great, do that.

 

 

Why use Common-Flow instead of GitHub Flow, and how does it differ?

Common-Flow is essentially GitHub Flow with the addition of a "Release" concept that uses tags. It also attempts to define how certain common tasks are done, like updating change/feature branches from their source branches for example. This is to help end arguments about how such things are done.

If a deployment/release for you is just getting the latest code in the master branch out, without caring about bumping version numbers or anything, then GitHub Flow is a good fit for you, and you probably don't need the extras of Common-Flow.

However if your deployments/releases have specific version numbers, then Common-Flow gives you a simple set of rules of how to create and manage releases, on top of what GitHub Flow already does.

 

 

What does "descriptive name" mean for change branches?

It means what it sounds like. The name should be descriptive, as in by just reading the name of the branch you should understand what the branch's purpose is and what it does. Here are a few examples:

  • add-2fa-support
  • fix-login-issue
  • remove-sort-by-middle-name-functionality
  • update-font-awesome
  • change-search-behavior
  • improve-pagination-performance
  • tweak-footer-style

Notice how none of these have any prefixes like "feature/" or "hotfix/"; they're not needed when branch names are properly descriptive. However, there's nothing to say you can't use such prefixes if you want.

You can also add ticket numbers to the branch name if your team/org has that as part of its process. But it is recommended that ticket numbers are added to the end of the branch name. The ticket number is essentially metadata, so put it at the end and out of the way of humans trying to read the descriptive name from left to right.

 

 

 

How do we release an emergency hotfix when the master branch is broken?

This should ideally never happen; however, if it does, you can do one of the following:

  • Review why the master branch is broken and revert the changes that caused the issues. Then apply the hotfix and release.
  • Or use a short-term release branch created from the latest release tag instead of the master branch. Apply the hotfix to the release branch, create a release tag on the release branch, and then merge it back into master.

In this situation, it is recommended you try to revert the offending changes that are preventing a new release from master. But if that proves to be a complicated task and you're short on time, a short-term release branch gives you an instant fix to the situation at hand, and lets you resolve the issues with the master branch when you have more time on your hands.
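
A sketch of the release-branch option, assuming the latest release tag is "2.11.4" and the hotfix becomes "2.11.5" (all names hypothetical):

# short-term release branch from the last known-good release, bypassing broken master
git checkout -b release-2.11.5 2.11.4
git cherry-pick <hotfix-commit-sha>
echo "2.11.5" > VERSION
git add VERSION
git commit -m "Bump version to 2.11.5"
git tag 2.11.5

# then merge the release branch back into master as usual
git checkout master
git merge --no-ff release-2.11.5
git push origin master 2.11.5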

 

 


This week's topic is EKS automation.

 

First, deploy the lab environment.

# Download the YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/eks-oneclick6.yaml

# Deploy the CloudFormation stack (example)
aws cloudformation deploy --template-file eks-oneclick6.yaml --stack-name myeks --parameter-overrides KeyName=kp-gasida SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32  MyIamUserAccessKeyID=AKIA5... MyIamUserSecretAccessKey='CVNa2...' ClusterBaseName=myeks --region ap-northeast-2

# Print the working EC2 IP once the stack deployment completes
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text

# SSH into the working EC2 instance
ssh -i ~/.ssh/kp-gasida.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)
# Switch to the default namespace
kubectl ns default

# (Optional) rename the context
NICK=<your nickname>
NICK=gasida
kubectl ctx
kubectl config rename-context admin@myeks.ap-northeast-2.eksctl.io $NICK

# ExternalDNS
MyDomain=<your domain>
echo "export MyDomain=<your domain>" >> /etc/profile
MyDomain=gasida.link
echo "export MyDomain=gasida.link" >> /etc/profile
MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." --query "HostedZones[0].Id" --output text)
echo $MyDomain, $MyDnzHostedZoneId
curl -s -O https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml
MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst < externaldns.yaml | kubectl apply -f -

# AWS LB Controller
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

# Check node IPs and set private IP variables
N1=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2a -o jsonpath={.items[0].status.addresses[0].address})
N2=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2b -o jsonpath={.items[0].status.addresses[0].address})
N3=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2c -o jsonpath={.items[0].status.addresses[0].address})
echo "export N1=$N1" >> /etc/profile
echo "export N2=$N2" >> /etc/profile
echo "export N3=$N3" >> /etc/profile
echo $N1, $N2, $N3

# Check the node security group ID
NGSGID=$(aws ec2 describe-security-groups --filters Name=group-name,Values='*ng1*' --query "SecurityGroups[*].[GroupId]" --output text)
aws ec2 authorize-security-group-ingress --group-id $NGSGID --protocol '-1' --cidr 192.168.1.0/24

# SSH into the worker nodes
for node in $N1 $N2 $N3; do ssh ec2-user@$node hostname; done
# Check the certificate ARN in the region in use
CERT_ARN=`aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text`
echo $CERT_ARN

# Add the helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Create the values file
cat <<EOT > monitor-values.yaml
prometheus:
  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
    retention: 5d
    retentionSize: "10GiB"

  ingress:
    enabled: true
    ingressClassName: alb
    hosts: 
      - prometheus.$MyDomain
    paths: 
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'

grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: prom-operator

  ingress:
    enabled: true
    ingressClassName: alb
    hosts: 
      - grafana.$MyDomain
    paths: 
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'

defaultRules:
  create: false
kubeControllerManager:
  enabled: false
kubeEtcd:
  enabled: false
kubeScheduler:
  enabled: false
alertmanager:
  enabled: false
EOT

# Deploy
kubectl create ns monitoring
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 45.27.2 \
--set prometheus.prometheusSpec.scrapeInterval='15s' --set prometheus.prometheusSpec.evaluationInterval='15s' \
-f monitor-values.yaml --namespace monitoring

# Deploy Metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

 

ACK (AWS Controllers for Kubernetes)

ACK lets you define and use AWS service resources directly from Kubernetes; in other words, you can create and manage AWS resources from inside the cluster.

Note, however, that not every AWS service is supported.

  • Maintenance phases: PREVIEW (testing stage, not recommended for production), GENERAL AVAILABILITY (recommended for production), DEPRECATED, NOT SUPPORTED
  • GA services: ApiGatewayV2, CloudTrail, DynamoDB, EC2, ECR, EKS, IAM, KMS, Lambda, MemoryDB, RDS, S3, SageMaker…
  • Preview services: ACM, ElastiCache, EventBridge, MQ, Route 53, SNS, SQS…

 

To deploy S3 from Kubernetes (with Helm), proceed as follows.

# Set the service name variable
export SERVICE=s3

# Download the helm chart
#aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws
export RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/$SERVICE-controller/releases/latest | grep '"tag_name":' | cut -d'"' -f4 | cut -c 2-)
helm pull oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart --version=$RELEASE_VERSION
tar xzvf $SERVICE-chart-$RELEASE_VERSION.tgz

# Inspect the helm chart
tree ~/$SERVICE-chart

# Install the ACK S3 controller
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=ap-northeast-2
helm install --create-namespace -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller --set aws.region="$AWS_REGION" ~/$SERVICE-chart

# Verify the installation
helm list --namespace $ACK_SYSTEM_NAMESPACE
kubectl -n ack-system get pods
kubectl get crd | grep $SERVICE
buckets.s3.services.k8s.aws                  2022-04-24T13:24:00Z

kubectl get all -n ack-system
kubectl get-all -n ack-system
kubectl describe sa -n ack-system ack-s3-controller

First install the ACK S3 controller as above...

then configure IRSA for it as shown above.

 

Now let's deploy an S3 bucket.

# [Terminal 1] monitoring
watch -d aws s3 ls

# Create the manifest for the S3 bucket
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query "Account" --output text)
export BUCKET_NAME=my-ack-s3-bucket-$AWS_ACCOUNT_ID

read -r -d '' BUCKET_MANIFEST <<EOF
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: $BUCKET_NAME
spec:
  name: $BUCKET_NAME
EOF

echo "${BUCKET_MANIFEST}" > bucket.yaml
cat bucket.yaml | yh

# Create the S3 bucket
aws s3 ls
kubectl create -f bucket.yaml
bucket.s3.services.k8s.aws/my-ack-s3-bucket-<my account id> created

# Verify the S3 bucket
aws s3 ls
kubectl get buckets
kubectl describe bucket/$BUCKET_NAME | head -6
Name:         my-ack-s3-bucket-<my account id>
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  s3.services.k8s.aws/v1alpha1
Kind:         Bucket

aws s3 ls | grep $BUCKET_NAME
2022-04-24 18:02:07 my-ack-s3-bucket-<my account id>

# Update the S3 bucket: add tag information
read -r -d '' BUCKET_MANIFEST <<EOF
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: $BUCKET_NAME
spec:
  name: $BUCKET_NAME
  tagging:
    tagSet:
    - key: myTagKey
      value: myTagValue
EOF

echo "${BUCKET_MANIFEST}" > bucket.yaml

# Apply the bucket update (the required annotations are updated automatically, so any warning here can be ignored)
kubectl apply -f bucket.yaml

# Verify the bucket update
kubectl describe bucket/$BUCKET_NAME | grep Spec: -A5
Spec:
  Name:  my-ack-s3-bucket-<my account id>
  Tagging:
    Tag Set:
      Key:    myTagKey
      Value:  myTagValue

# Delete the S3 bucket
kubectl delete -f bucket.yaml

# verify the bucket no longer exists
kubectl get bucket/$BUCKET_NAME
aws s3 ls | grep $BUCKET_NAME

As shown above, the S3 bucket is deployed with minimal effort.

 

Next, updating the deployed bucket as above...

 

...confirms that the tag set has been added.

 

EC2 deployment proceeds as follows (nearly identical to S3).

# Set the service name variable and download the helm chart
export SERVICE=ec2
export RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/$SERVICE-controller/releases/latest | grep '"tag_name":' | cut -d'"' -f4 | cut -c 2-)
helm pull oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart --version=$RELEASE_VERSION
tar xzvf $SERVICE-chart-$RELEASE_VERSION.tgz

# Inspect the helm chart
tree ~/$SERVICE-chart

# Install the ACK EC2 controller
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=ap-northeast-2
helm install -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller --set aws.region="$AWS_REGION" ~/$SERVICE-chart

# Verify the installation
helm list --namespace $ACK_SYSTEM_NAMESPACE
kubectl -n $ACK_SYSTEM_NAMESPACE get pods -l "app.kubernetes.io/instance=ack-$SERVICE-controller"
kubectl get crd | grep $SERVICE
dhcpoptions.ec2.services.k8s.aws             2023-05-30T12:45:13Z
elasticipaddresses.ec2.services.k8s.aws      2023-05-30T12:45:13Z
instances.ec2.services.k8s.aws               2023-05-30T12:45:13Z
internetgateways.ec2.services.k8s.aws        2023-05-30T12:45:13Z
natgateways.ec2.services.k8s.aws             2023-05-30T12:45:13Z
routetables.ec2.services.k8s.aws             2023-05-30T12:45:13Z
securitygroups.ec2.services.k8s.aws          2023-05-30T12:45:13Z
subnets.ec2.services.k8s.aws                 2023-05-30T12:45:13Z
transitgateways.ec2.services.k8s.aws         2023-05-30T12:45:13Z
vpcendpoints.ec2.services.k8s.aws            2023-05-30T12:45:13Z
vpcs.ec2.services.k8s.aws                    2023-05-30T12:45:13Z

 

 

 

 

# Create an iamserviceaccount - AWS IAM role bound to a Kubernetes service account
eksctl create iamserviceaccount \
  --name ack-$SERVICE-controller \
  --namespace $ACK_SYSTEM_NAMESPACE \
  --cluster $CLUSTER_NAME \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonEC2FullAccess`].Arn' --output text) \
  --override-existing-serviceaccounts --approve

# Verify >> check the IAM Role under the CloudFormation stacks in the web console
eksctl get iamserviceaccount --cluster $CLUSTER_NAME

# Inspecting the newly created Kubernetes Service Account, we can see the role we want it to assume in our pod.
kubectl get sa -n $ACK_SYSTEM_NAMESPACE
kubectl describe sa ack-$SERVICE-controller -n $ACK_SYSTEM_NAMESPACE

# Restart ACK service controller deployment using the following commands.
kubectl -n $ACK_SYSTEM_NAMESPACE rollout restart deploy ack-$SERVICE-controller-$SERVICE-chart

# Confirm the Env and Volume entries added by IRSA
kubectl describe pod -n $ACK_SYSTEM_NAMESPACE -l k8s-app=$SERVICE-chart
...

 

 

Create and delete a VPC and subnet as follows.

# [Terminal 1] monitoring
while true; do aws ec2 describe-vpcs --query 'Vpcs[*].{VPCId:VpcId, CidrBlock:CidrBlock}' --output text; echo "-----"; sleep 1; done

# Create a VPC
cat <<EOF > vpc.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: VPC
metadata:
  name: vpc-tutorial-test
spec:
  cidrBlocks: 
  - 10.0.0.0/16
  enableDNSSupport: true
  enableDNSHostnames: true
EOF
 
kubectl apply -f vpc.yaml
vpc.ec2.services.k8s.aws/vpc-tutorial-test created

# Verify VPC creation
kubectl get vpcs
kubectl describe vpcs
aws ec2 describe-vpcs --query 'Vpcs[*].{VPCId:VpcId, CidrBlock:CidrBlock}' --output text

# [Terminal 1] monitoring
VPCID=$(kubectl get vpcs vpc-tutorial-test -o jsonpath={.status.vpcID})
while true; do aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPCID" --query 'Subnets[*].{SubnetId:SubnetId, CidrBlock:CidrBlock}' --output text; echo "-----"; sleep 1 ; done

# Create a subnet
VPCID=$(kubectl get vpcs vpc-tutorial-test -o jsonpath={.status.vpcID})

cat <<EOF > subnet.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: subnet-tutorial-test
spec:
  cidrBlock: 10.0.0.0/20
  vpcID: $VPCID
EOF
kubectl apply -f subnet.yaml

# Verify subnet creation
kubectl get subnets
kubectl describe subnets
aws ec2 describe-subnets --filters "Name=vpc-id,Values=$VPCID" --query 'Subnets[*].{SubnetId:SubnetId, CidrBlock:CidrBlock}' --output text

# Delete the resources
kubectl delete -f subnet.yaml && kubectl delete -f vpc.yaml

 

 

 

 

 

Next, let's create an entire VPC workflow.

 

cat <<EOF > vpc-workflow.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: VPC
metadata:
  name: tutorial-vpc
spec:
  cidrBlocks: 
  - 10.0.0.0/16
  enableDNSSupport: true
  enableDNSHostnames: true
  tags:
    - key: name
      value: vpc-tutorial
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: InternetGateway
metadata:
  name: tutorial-igw
spec:
  vpcRef:
    from:
      name: tutorial-vpc
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: NATGateway
metadata:
  name: tutorial-natgateway1
spec:
  subnetRef:
    from:
      name: tutorial-public-subnet1
  allocationRef:
    from:
      name: tutorial-eip1
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: ElasticIPAddress
metadata:
  name: tutorial-eip1
spec:
  tags:
    - key: name
      value: eip-tutorial
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: RouteTable
metadata:
  name: tutorial-public-route-table
spec:
  vpcRef:
    from:
      name: tutorial-vpc
  routes:
  - destinationCIDRBlock: 0.0.0.0/0
    gatewayRef:
      from:
        name: tutorial-igw
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: RouteTable
metadata:
  name: tutorial-private-route-table-az1
spec:
  vpcRef:
    from:
      name: tutorial-vpc
  routes:
  - destinationCIDRBlock: 0.0.0.0/0
    natGatewayRef:
      from:
        name: tutorial-natgateway1
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: tutorial-public-subnet1
spec:
  availabilityZone: ap-northeast-2a
  cidrBlock: 10.0.0.0/20
  mapPublicIPOnLaunch: true
  vpcRef:
    from:
      name: tutorial-vpc
  routeTableRefs:
  - from:
      name: tutorial-public-route-table
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Subnet
metadata:
  name: tutorial-private-subnet1
spec:
  availabilityZone: ap-northeast-2a
  cidrBlock: 10.0.128.0/20
  vpcRef:
    from:
      name: tutorial-vpc
  routeTableRefs:
  - from:
      name: tutorial-private-route-table-az1
---
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: SecurityGroup
metadata:
  name: tutorial-security-group
spec:
  description: "ack security group"
  name: tutorial-sg
  vpcRef:
     from:
       name: tutorial-vpc
  ingressRules:
    - ipProtocol: tcp
      fromPort: 22
      toPort: 22
      ipRanges:
        - cidrIP: "0.0.0.0/0"
          description: "ingress"
EOF

 

# Create the VPC environment
kubectl apply -f vpc-workflow.yaml

# [Terminal 1] once the NAT gateway is created, the tutorial-private-route-table-az1 route table ID appears, followed by the tutorial-private-subnet1 subnet ID > takes about 5 minutes
watch -d kubectl get routetables,subnet

# Verify the VPC environment
kubectl describe vpcs
kubectl describe internetgateways
kubectl describe routetables
kubectl describe natgateways
kubectl describe elasticipaddresses
kubectl describe securitygroups

# Compare the status of the two subnets while the deployment is in progress
kubectl describe subnets
...
Status:
  Conditions:
    Last Transition Time:  2023-06-04T02:15:25Z
    Message:               Reference resolution failed
    Reason:                the referenced resource is not synced yet. resource:RouteTable, namespace:default, name:tutorial-private-route-table-az1
    Status:                Unknown
    Type:                  ACK.ReferencesResolved
...
Status:
  Ack Resource Metadata:
    Arn:                       arn:aws:ec2:ap-northeast-2:911283464785:subnet/subnet-0f5ae09e5d680030a
    Owner Account ID:          911283464785
    Region:                    ap-northeast-2
  Available IP Address Count:  4091
  Conditions:
    Last Transition Time:           2023-06-04T02:14:45Z
    Status:                         True
    Type:                           ACK.ReferencesResolved
    Last Transition Time:           2023-06-04T02:14:45Z
    Message:                        Resource synced successfully
    Reason:
    Status:                         True
    Type:                           ACK.ResourceSynced
...

Note that the earlier EC2 and VPC exercises must be completed before the VPC workflow can be created, because the CRDs need to exist first.

Also, some resources are created late due to dependencies between them, so you will need to wait a while.

 

Now let's create an instance in the public subnet.

# Check the public subnet ID
PUBSUB1=$(kubectl get subnets tutorial-public-subnet1 -o jsonpath={.status.subnetID})
echo $PUBSUB1

# Check the security group ID
TSG=$(kubectl get securitygroups tutorial-security-group -o jsonpath={.status.id})
echo $TSG

# Check the latest Amazon Linux 2 AMI ID
AL2AMI=$(aws ec2 describe-images --owners amazon --filters "Name=name,Values=amzn2-ami-hvm-2.0.*-x86_64-gp2" --query 'Images[0].ImageId' --output text)
echo $AL2AMI

# Set your own SSH key pair name
MYKEYPAIR=<your SSH key pair name>
MYKEYPAIR=kp-gasida

# Check the variables > in particular, make sure the subnet ID resolved!
echo $PUBSUB1 , $TSG , $AL2AMI , $MYKEYPAIR

# [Terminal 1] monitoring
while true; do aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table; date ; sleep 1 ; done

# Create an instance in the public subnet
cat <<EOF > tutorial-bastion-host.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Instance
metadata:
  name: tutorial-bastion-host
spec:
  imageID: $AL2AMI # AL2 AMI ID - ap-northeast-2
  instanceType: t3.medium
  subnetID: $PUBSUB1
  securityGroupIDs:
  - $TSG
  keyName: $MYKEYPAIR
  tags:
    - key: producer
      value: ack
EOF
kubectl apply -f tutorial-bastion-host.yaml

# Verify instance creation
kubectl get instance
kubectl describe instance
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table

You can confirm that the new instance has been created.

However, you won't be able to connect to it yet.

You need to add an egress rule, as below.

cat <<EOF > modify-sg.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: SecurityGroup
metadata:
  name: tutorial-security-group
spec:
  description: "ack security group"
  name: tutorial-sg
  vpcRef:
     from:
       name: tutorial-vpc
  ingressRules:
    - ipProtocol: tcp
      fromPort: 22
      toPort: 22
      ipRanges:
        - cidrIP: "0.0.0.0/0"
          description: "ingress"
  egressRules:
    - ipProtocol: '-1'
      ipRanges:
        - cidrIP: "0.0.0.0/0"
          description: "egress"
EOF
kubectl apply -f modify-sg.yaml

# Verify the change >> check the outbound rule on the security group
kubectl logs -n $ACK_SYSTEM_NAMESPACE -l k8s-app=ec2-chart -f

 

 

Next, let's create an instance in the private subnet.

# Check the private subnet ID >> the RT/SubnetID appears only after the NAT gateway is created, so this can take some time
PRISUB1=$(kubectl get subnets tutorial-private-subnet1 -o jsonpath={.status.subnetID})
echo $PRISUB1

# Check the variables > in particular, make sure the private subnet ID resolved!
echo $PRISUB1 , $TSG , $AL2AMI , $MYKEYPAIR

# Create an instance in the private subnet
cat <<EOF > tutorial-instance-private.yaml
apiVersion: ec2.services.k8s.aws/v1alpha1
kind: Instance
metadata:
  name: tutorial-instance-private
spec:
  imageID: $AL2AMI # AL2 AMI ID - ap-northeast-2
  instanceType: t3.medium
  subnetID: $PRISUB1
  securityGroupIDs:
  - $TSG
  keyName: $MYKEYPAIR
  tags:
    - key: producer
      value: ack
EOF
kubectl apply -f tutorial-instance-private.yaml

# Verify instance creation
kubectl get instance
kubectl describe instance
aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table

The private instance is likewise confirmed to be created.

 

RDS works the same way:

# Set the service name variable and download the helm chart
export SERVICE=rds
export RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/$SERVICE-controller/releases/latest | grep '"tag_name":' | cut -d'"' -f4 | cut -c 2-)
helm pull oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart --version=$RELEASE_VERSION
tar xzvf $SERVICE-chart-$RELEASE_VERSION.tgz

# Inspect the helm chart
tree ~/$SERVICE-chart

# Install the ACK RDS controller
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=ap-northeast-2
helm install -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller --set aws.region="$AWS_REGION" ~/$SERVICE-chart

# Verify the installation
helm list --namespace $ACK_SYSTEM_NAMESPACE
kubectl -n $ACK_SYSTEM_NAMESPACE get pods -l "app.kubernetes.io/instance=ack-$SERVICE-controller"
kubectl get crd | grep $SERVICE

# Create an iamserviceaccount - AWS IAM role bound to a Kubernetes service account
eksctl create iamserviceaccount \
  --name ack-$SERVICE-controller \
  --namespace $ACK_SYSTEM_NAMESPACE \
  --cluster $CLUSTER_NAME \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonRDSFullAccess`].Arn' --output text) \
  --override-existing-serviceaccounts --approve

# Verify >> check the IAM Role under the CloudFormation stacks in the web console
eksctl get iamserviceaccount --cluster $CLUSTER_NAME

# Inspecting the newly created Kubernetes Service Account, we can see the role we want it to assume in our pod.
kubectl get sa -n $ACK_SYSTEM_NAMESPACE
kubectl describe sa ack-$SERVICE-controller -n $ACK_SYSTEM_NAMESPACE

# Restart ACK service controller deployment using the following commands.
kubectl -n $ACK_SYSTEM_NAMESPACE rollout restart deploy ack-$SERVICE-controller-$SERVICE-chart

# Confirm the Env and Volume entries added by IRSA
kubectl describe pod -n $ACK_SYSTEM_NAMESPACE -l k8s-app=$SERVICE-chart
...
# Create a secret for the DB password
RDS_INSTANCE_NAME="<your instance name>"
RDS_INSTANCE_PASSWORD="<your instance password>"
RDS_INSTANCE_NAME=myrds
RDS_INSTANCE_PASSWORD=qwe12345
kubectl create secret generic "${RDS_INSTANCE_NAME}-password" --from-literal=password="${RDS_INSTANCE_PASSWORD}"

# Verify
kubectl get secret $RDS_INSTANCE_NAME-password

# [Terminal 1] monitoring
RDS_INSTANCE_NAME=myrds
watch -d "kubectl describe dbinstance "${RDS_INSTANCE_NAME}" | grep 'Db Instance Status'"

# Create the RDS deployment: takes up to 15 minutes >> try adding the options you need, such as security group and subnet!
cat <<EOF > rds-mariadb.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: "${RDS_INSTANCE_NAME}"
spec:
  allocatedStorage: 20
  dbInstanceClass: db.t4g.micro
  dbInstanceIdentifier: "${RDS_INSTANCE_NAME}"
  engine: mariadb
  engineVersion: "10.6"
  masterUsername: "admin"
  masterUserPassword:
    namespace: default
    name: "${RDS_INSTANCE_NAME}-password"
    key: password
EOF
kubectl apply -f rds-mariadb.yaml

# Verify creation
kubectl get dbinstances  ${RDS_INSTANCE_NAME}
kubectl describe dbinstance "${RDS_INSTANCE_NAME}"
aws rds describe-db-instances --db-instance-identifier $RDS_INSTANCE_NAME | jq

kubectl describe dbinstance "${RDS_INSTANCE_NAME}" | grep 'Db Instance Status'
  Db Instance Status:         creating

kubectl describe dbinstance "${RDS_INSTANCE_NAME}" | grep 'Db Instance Status'
  Db Instance Status:         backing-up

kubectl describe dbinstance "${RDS_INSTANCE_NAME}" | grep 'Db Instance Status'
  Db Instance Status:         available

# Wait for creation to complete: exits normally once the specified condition is met
kubectl wait dbinstances ${RDS_INSTANCE_NAME} --for=condition=ACK.ResourceSynced --timeout=15m
dbinstance.rds.services.k8s.aws/myrds condition met

 

Next, the study moves on to Flux.

Installation is as follows.

# Install the Flux CLI
curl -s https://fluxcd.io/install.sh | sudo bash
. <(flux completion bash)

# Check the version
flux --version
flux version 2.0.0-rc.5

# Set your GitHub token and username variables
export GITHUB_TOKEN=
export GITHUB_USER=
export GITHUB_TOKEN=ghp_###
export GITHUB_USER=gasida

# Bootstrap
## Creates a git repository fleet-infra on your GitHub account.
## Adds Flux component manifests to the repository.
## Deploys Flux Components to your Kubernetes Cluster.
## Configures Flux components to track the path /clusters/my-cluster/ in the repository.
flux bootstrap github \
  --owner=$GITHUB_USER \
  --repository=fleet-infra \
  --branch=main \
  --path=./clusters/my-cluster \
  --personal

# Verify the installation
kubectl get pods -n flux-system
kubectl get-all -n flux-system
kubectl get crd | grep fluxc
kubectl get gitrepository -n flux-system
NAME          URL                                       AGE    READY   STATUS
flux-system   ssh://git@github.com/gasida/fleet-infra   4m6s   True    stored artifact for revision 'main@sha1:4172548433a9f4e089758c3512b0b24d289e9702'

Installation proceeds as above.

 

Install the GitOps tooling as follows.

# Install the gitops CLI
curl --silent --location "https://github.com/weaveworks/weave-gitops/releases/download/v0.24.0/gitops-$(uname)-$(uname -m).tar.gz" | tar xz -C /tmp
sudo mv /tmp/gitops /usr/local/bin
gitops version

# Install the Flux dashboard
PASSWORD="password"
gitops create dashboard ww-gitops --password=$PASSWORD

# Verify
flux -n flux-system get helmrelease
kubectl -n flux-system get pod,svc

Then configure an Ingress for it.

CERT_ARN=`aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text`
echo $CERT_ARN

# Configure the Ingress
cat <<EOT > gitops-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitops-ingress
  annotations:
    alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
    alb.ingress.kubernetes.io/group.name: study
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/ssl-redirect: "443"
    alb.ingress.kubernetes.io/success-codes: 200-399
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - host: gitops.$MyDomain
    http:
      paths:
      - backend:
          service:
            name: ww-gitops-weave-gitops
            port:
              number: 9001
        path: /
        pathType: Prefix
EOT
kubectl apply -f gitops-ingress.yaml -n flux-system

# Verify the deployment
kubectl get ingress -n flux-system

# Check the GitOps access info >> open the web UI and confirm
echo -e "GitOps Web https://gitops.$MyDomain"

Below is the hello-world example source.

# Create a source: types - git, helm, oci, bucket
# flux create source {source type}
# Create a git source from the repo prepared by 악분(최성욱)
GITURL="https://github.com/sungwook-practice/fluxcd-test.git"
flux create source git nginx-example1 --url=$GITURL --branch=main --interval=30s

# Verify the source
flux get sources git
kubectl -n flux-system get gitrepositories

 

Then create the Flux application as follows.

# [Terminal] monitoring
watch -d kubectl get pod,svc nginx-example1

# Create a flux application: nginx-example1
flux create kustomization nginx-example1 --target-namespace=default --interval=1m --source=nginx-example1 --path="./nginx" --health-check-timeout=2m

# Verify
kubectl get pod,svc nginx-example1
kubectl get kustomizations -n flux-system
flux get kustomizations

 

 

 

 

 

 

Installation complete and verified.

 

 


The topic for week 6 is EKS security.

 

The prerequisites are as follows.

# Download the YAML file
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/K8S/eks-oneclick5.yaml

# Deploy the CloudFormation stack (example)
aws cloudformation deploy --template-file eks-oneclick5.yaml --stack-name myeks --parameter-overrides KeyName=kp-gasida SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32  MyIamUserAccessKeyID=AKIA5... MyIamUserSecretAccessKey='CVNa2...' ClusterBaseName=myeks --region ap-northeast-2

# Print the working EC2 IP once the stack deployment completes
aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text

# SSH into the working EC2 instance
ssh -i ~/.ssh/kp-gasida.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)

# Basic settings
# Switch to the default namespace
kubectl ns default

# (Optional) rename the context
NICK=<your nickname>
NICK=gasida
kubectl ctx
kubectl config rename-context admin@myeks.ap-northeast-2.eksctl.io $NICK

# ExternalDNS
MyDomain=<your domain>
echo "export MyDomain=<your domain>" >> /etc/profile
MyDomain=gasida.link
echo "export MyDomain=gasida.link" >> /etc/profile
MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." --query "HostedZones[0].Id" --output text)
echo $MyDomain, $MyDnzHostedZoneId
curl -s -O https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml
MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst < externaldns.yaml | kubectl apply -f -

# AWS LB Controller
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME \
  --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

# Check node IPs and set private IP variables
N1=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2a -o jsonpath={.items[0].status.addresses[0].address})
N2=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2b -o jsonpath={.items[0].status.addresses[0].address})
N3=$(kubectl get node --label-columns=topology.kubernetes.io/zone --selector=topology.kubernetes.io/zone=ap-northeast-2c -o jsonpath={.items[0].status.addresses[0].address})
echo "export N1=$N1" >> /etc/profile
echo "export N2=$N2" >> /etc/profile
echo "export N3=$N3" >> /etc/profile
echo $N1, $N2, $N3

# Check the node security group ID
NGSGID=$(aws ec2 describe-security-groups --filters Name=group-name,Values='*ng1*' --query "SecurityGroups[*].[GroupId]" --output text)
aws ec2 authorize-security-group-ingress --group-id $NGSGID --protocol '-1' --cidr 192.168.1.0/24

# SSH into the worker nodes
for node in $N1 $N2 $N3; do ssh ec2-user@$node hostname; done
  • Install Prometheus & Grafana (admin / prom-operator); recommended dashboards: 15757, 17900, 15172

# Check the certificate ARN in the region in use
CERT_ARN=`aws acm list-certificates --query 'CertificateSummaryList[].CertificateArn[]' --output text`
echo $CERT_ARN

# Add the helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts

# Create the values file
cat <<EOT > monitor-values.yaml
prometheus:
  prometheusSpec:
    podMonitorSelectorNilUsesHelmValues: false
    serviceMonitorSelectorNilUsesHelmValues: false
    retention: 5d
    retentionSize: "10GiB"

  ingress:
    enabled: true
    ingressClassName: alb
    hosts: 
      - prometheus.$MyDomain
    paths: 
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'

grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: prom-operator

  ingress:
    enabled: true
    ingressClassName: alb
    hosts: 
      - grafana.$MyDomain
    paths: 
      - /*
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
      alb.ingress.kubernetes.io/certificate-arn: $CERT_ARN
      alb.ingress.kubernetes.io/success-codes: 200-399
      alb.ingress.kubernetes.io/load-balancer-name: myeks-ingress-alb
      alb.ingress.kubernetes.io/group.name: study
      alb.ingress.kubernetes.io/ssl-redirect: '443'

defaultRules:
  create: false
kubeControllerManager:
  enabled: false
kubeEtcd:
  enabled: false
kubeScheduler:
  enabled: false
alertmanager:
  enabled: false
EOT

# Deploy
kubectl create ns monitoring
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 45.27.2 \
--set prometheus.prometheusSpec.scrapeInterval='15s' --set prometheus.prometheusSpec.evaluationInterval='15s' \
-f monitor-values.yaml --namespace monitoring

# Deploy Metrics-server
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

 

Before explaining EKS authentication/authorization: the hands-on assumes prior knowledge of AWS IAM and Kubernetes authentication.

 

EKS handles authentication with AWS IAM and authorization with Kubernetes RBAC.

You can inspect the RBAC-related roles as follows.

# Install
kubectl krew install access-matrix rbac-tool rbac-view rolesum

# Show an RBAC access matrix for server resources
kubectl access-matrix # Review access to cluster-scoped resources
kubectl access-matrix --namespace default # Review access to namespaced resources in 'default'

# RBAC Lookup by subject (user/group/serviceaccount) name
kubectl rbac-tool lookup
kubectl rbac-tool lookup system:masters
  SUBJECT        | SUBJECT TYPE | SCOPE       | NAMESPACE | ROLE
+----------------+--------------+-------------+-----------+---------------+
  system:masters | Group        | ClusterRole |           | cluster-admin

kubectl rbac-tool lookup system:nodes # eks:node-bootstrapper
kubectl rbac-tool lookup system:bootstrappers # eks:node-bootstrapper
kubectl describe ClusterRole eks:node-bootstrapper

# RBAC List Policy Rules For subject (user/group/serviceaccount) name
kubectl rbac-tool policy-rules
kubectl rbac-tool policy-rules -e '^system:.*'

# Generate ClusterRole with all available permissions from the target cluster
kubectl rbac-tool show

# Shows the subject for the current context with which one authenticates with the cluster
kubectl rbac-tool whoami
{Username: "kubernetes-admin",
 UID:      "aws-iam-authenticator:911283.....:AIDA5ILF2FJ......",
 Groups:   ["system:masters",
            "system:authenticated"],
 Extra:    {accessKeyId:  ["AKIA5ILF2FJ....."],
            arn:          ["arn:aws:iam::911283....:user/admin"],
            canonicalArn: ["arn:aws:iam::911283....:user/admin"],
            principalId:  ["AIDA5ILF2FJ....."],
            sessionName:  [""]}}

# Summarize RBAC roles for subjects : ServiceAccount(default), User, Group
kubectl rolesum -h
kubectl rolesum aws-node -n kube-system
kubectl rolesum -k User system:kube-proxy
kubectl rolesum -k Group system:masters

# [Terminal 1] A tool to visualize your RBAC permissions
kubectl rbac-view
INFO[0000] Getting K8s client
INFO[0000] serving RBAC View and http://localhost:8800

## Then open <public IP of the working PC>:8800 in a browser
echo -e "RBAC View Web http://$(curl -s ipinfo.io/ip):8800"

 

Reference video: https://youtu.be/bksogA-WXv8

 

As shown above, kubectl obtains an STS token through AWS IAM. Decoding the token shows things like the GetCallerIdentity call, the API version, and the expiration date.
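
For reference, a sketch of inspecting such a token by hand (cluster name hypothetical; the token body after the "k8s-aws-v1." prefix is a base64url-encoded presigned STS GetCallerIdentity URL):

# issue a token the same way kubectl's exec credential plugin does
TOKEN=$(aws eks get-token --cluster-name myeks | jq -r '.status.token')

# convert base64url to base64 and decode; padding may need to be added
# by hand for base64 -d to succeed on some inputs
echo "${TOKEN#k8s-aws-v1.}" | tr '_-' '/+' | base64 -d
# ...prints a presigned https://sts.<region>.amazonaws.com/... URL containing
# Action=GetCallerIdentity and an X-Amz-Expires parameter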

 

 

When this token is presented to the EKS API, the API server hands it to the webhook token authenticator.

AWS STS (AWS IAM) then responds; once authentication succeeds, it returns the ARN of the user/role.

 

Kubernetes then processes authorization with RBAC.

# Check the Webhook api resources
kubectl api-resources | grep Webhook
mutatingwebhookconfigurations                  admissionregistration.k8s.io/v1        false        MutatingWebhookConfiguration
validatingwebhookconfigurations                admissionregistration.k8s.io/v1        false        ValidatingWebhookConfiguration

# Check the validatingwebhookconfigurations resources
kubectl get validatingwebhookconfigurations
NAME                                        WEBHOOKS   AGE
eks-aws-auth-configmap-validation-webhook   1          50m
vpc-resource-validating-webhook             2          50m
aws-load-balancer-webhook                   3          8m27s

kubectl get validatingwebhookconfigurations eks-aws-auth-configmap-validation-webhook -o yaml | kubectl neat | yh

# Check the aws-auth configmap
kubectl get cm -n kube-system aws-auth -o yaml | kubectl neat | yh
apiVersion: v1
kind: ConfigMap
metadata: 
  name: aws-auth
  namespace: kube-system
data: 
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::91128.....:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-1OS1WSTV0YB9X
      username: system:node:{{EC2PrivateDNSName}}
#---(rest omitted, presumably; the ARN is the IAM user that installed EKS. If it had been here and were deleted by mistake, could it have been recovered?)---
  mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::111122223333:user/admin
      username: kubernetes-admin

# Info on the IAM user that installed EKS >> how was system:authenticated added? curious???
kubectl rbac-tool whoami
{Username: "kubernetes-admin",
 UID:      "aws-iam-authenticator:9112834...:AIDA5ILF2FJIR2.....",
 Groups:   ["system:masters",
            "system:authenticated"],
...

# Check the system:masters and system:authenticated groups
kubectl rbac-tool lookup system:masters
kubectl rbac-tool lookup system:authenticated
kubectl rolesum -k Group system:masters
kubectl rolesum -k Group system:authenticated

# Check the cluster role available to the system:masters group : cluster-admin
kubectl describe clusterrolebindings.rbac.authorization.k8s.io cluster-admin
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
Role:
  Kind:  ClusterRole
  Name:  cluster-admin
Subjects:
  Kind   Name            Namespace
  ----   ----            ---------
  Group  system:masters

# Check the PolicyRule of cluster-admin : every resource can be used!
kubectl describe clusterrole cluster-admin
Name:         cluster-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources  Non-Resource URLs  Resource Names  Verbs
  ---------  -----------------  --------------  -----
  *.*        []                 []              [*]
             [*]                []              [*]

# Check the cluster roles available to the system:authenticated group
kubectl describe ClusterRole system:discovery
kubectl describe ClusterRole system:public-info-viewer
kubectl describe ClusterRole system:basic-user
kubectl describe ClusterRole eks:podsecuritypolicy:privileged

 

That covers the authentication/authorization flow.

 

Now let's walk through actually granting permissions.

# testuser 사용자 생성
aws iam create-user --user-name **testuser**

# 사용자에게 프로그래밍 방식 액세스 권한 부여
aws iam create-access-key --user-name **testuser**
{
    "AccessKey": {
        "UserName": "testuser",
        "**AccessKeyId**": "AKIA5ILF2##",
        "Status": "Active",
        "**SecretAccessKey**": "TxhhwsU8##",
        "CreateDate": "2023-05-23T07:40:09+00:00"
    }
}
# Attach a policy to testuser
aws iam attach-user-policy --policy-arn arn:aws:iam::aws:policy/**AdministratorAccess** --user-name **testuser**

# Check get-caller-identity
aws sts get-caller-identity --query Arn
"arn:aws:iam::911283464785:user/admin"

# Check EC2 IPs: find the PublicIPAdd of myeks-bastion-EC2-2
**aws ec2 describe-instances --query "Reservations[*].Instances[*].{PublicIPAdd:PublicIpAddress,PrivateIPAdd:PrivateIpAddress,InstanceName:Tags[?Key=='Name']|[0].Value,Status:State.Name}" --filters Name=instance-state-name,Values=running --output table**

As above, we create the user and grant it an access key and a policy.

# Check get-caller-identity >> why does this fail? (this second bastion has no credentials configured yet)
**aws sts get-caller-identity --query Arn**

# Configure testuser credentials
**aws configure**
AWS Access Key ID [None]: *AKIA5ILF2F...*
AWS Secret Access Key [None]: *ePpXdhA3cP....*
Default region name [None]: ***ap-northeast-2***

# Check get-caller-identity
**aws sts get-caller-identity --query Arn**
"arn:aws:iam::911283464785:user/**testuser**"

# Try kubectl >> testuser also has **AdministratorAccess**, so why does it fail?
**kubectl get node -v6**
ls ~/.kube

Next, we hook up the AWS credentials as above. kubectl still fails because IAM permissions alone mean nothing inside Kubernetes; testuser must first be mapped in the aws-auth ConfigMap.

# Option 1: use eksctl >> running iamidentitymapping writes the aws-auth ConfigMap for you
# Creates a mapping from IAM role or user to Kubernetes user and groups
eksctl create iamidentitymapping --cluster $CLUSTER_NAME --username **testuser** --group **system:masters** --arn arn:aws:iam::$ACCOUNT_ID:user/**testuser**

# Verify
**kubectl get cm -n kube-system aws-auth -o yaml | kubectl neat | yh**
...

~~# Option 2: add the mapUsers entry directly via edit, as below!
**kubectl edit cm -n kube-system aws-auth**
---
apiVersion: v1
data: 
  mapRoles: |
    - groups:
      - system:bootstrappers
      - system:nodes
      rolearn: arn:aws:iam::911283464785:role/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-LHQ7DWHQQRZJ
      username: system:node:{{EC2PrivateDNSName}}
  **mapUsers: |
    - groups:
      - system:masters
      userarn: arn:aws:iam::911283464785:user/testuser
      username: testuser**
...~~

# Verify: what role/behavior does the pre-existing **role**/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-YYYYY serve?
**eksctl get iamidentitymapping --cluster $CLUSTER_NAME**
ARN											USERNAME				GROUPS					ACCOUNT
arn:aws:iam::911283464785:**role**/eksctl-myeks-nodegroup-ng1-NodeInstanceRole-LHQ7DWHQQRZJ	system:node:{{EC2PrivateDNSName}}	system:bootstrappers,system:nodes	
arn:aws:iam::911283464785:**user**/testuser							testuser				system:masters

As above, we grant EKS administrator privileges to the newly created testuser account.

 

We can confirm that the permission has been granted.
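
As a quick sanity check, here is a sketch to run wherever testuser's credentials are active: since system:masters binds to cluster-admin, every verb on every resource should come back "yes".

# A sketch: verify testuser's effective RBAC permissions
kubectl auth can-i '*' '*'
kubectl auth can-i --list | head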

 

You can also modify mapUsers directly with edit.

# Option 2: edit the mapUsers entry directly, e.g. change the group to **system:authenticated**
**kubectl edit cm -n kube-system aws-auth**
...

# Verify
eksctl get iamidentitymapping --cluster $CLUSTER_NAME

 

Removing testuser works as follows.

# Delete the testuser IAM mapping
eksctl **delete** iamidentitymapping --cluster $CLUSTER_NAME --arn  arn:aws:iam::$ACCOUNT_ID:user/testuser

# Get IAM identity mapping(s)
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
kubectl get cm -n kube-system aws-auth -o yaml | yh

So far we have looked at authentication/authorization for the user/application > Kubernetes direction.

 

From here on we look at the Kubernetes pod > AWS service direction: IRSA (IAM Roles for Service Accounts).

The key point is that authentication/authorization is performed through OIDC.
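
Before the labs, it can help to see the OIDC wiring IRSA relies on. A minimal sketch, assuming $CLUSTER_NAME is set as before:

# A sketch: the cluster's OIDC issuer URL, and the IAM OIDC identity providers registered in the account
aws eks describe-cluster --name $CLUSTER_NAME --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers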

 

 

Let's work through the labs.

# Create pod 1
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test1
spec:
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      args: ['s3', 'ls']
  restartPolicy: Never
  **automountServiceAccountToken: false**
EOF

# Verify
kubectl get pod
kubectl describe pod

# Check the logs
kubectl logs eks-iam-test1

# Delete pod 1
kubectl delete pod eks-iam-test1

automountServiceAccountToken: false

With automatic token issuance disabled as above, no service-account token is mounted into the pod, so the pod has no credentials at all and the `aws s3 ls` run fails.
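
A way to see this directly (a sketch; run before the delete step above): no token volume should appear in the pod spec, and with no credentials the aws-cli log typically ends with an error along the lines of "Unable to locate credentials".

# A sketch: confirm no service-account token volume was mounted into pod 1
kubectl get pod eks-iam-test1 -o jsonpath='{.spec.volumes}' ; echo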

 

Now for the second lab.

 

# Create pod 2
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test2
spec:
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      command: ['sleep', '36000']
  restartPolicy: Never
EOF

# Verify
kubectl get pod
kubectl describe pod

# Try using an AWS service
kubectl exec -it eks-iam-test2 -- aws s3 ls

# Check the service-account token
SA_TOKEN=$(kubectl exec -it eks-iam-test2 -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)
echo $SA_TOKEN

# Use the jwt CLI, or a JWT decoding website
jwt decode $SA_TOKEN --json --iso8601
...

# Header
{
  "alg": "RS256",
  "kid": "1a8fcaee12b3a8f191327b5e9b997487ae93baab"
}

# Payload: note the aud and exp claims also used in OAuth2! > the projectedServiceAccountToken feature appends audience and exp entries to the token
## iss claim: the EKS OpenID Connect Provider (EKS IdP) address > this EKS IdP is used to verify that the token Kubernetes issued is valid
{
  "aud": [
    "https://kubernetes.default.svc"  # 해당 주소는 k8s api의 ClusterIP 서비스 주소 도메인명, kubectl get svc kubernetes
  ],
  "exp": 1716619848,
  "iat": 1685083848,
  "iss": "https://oidc.eks.ap-northeast-2.amazonaws.com/id/F6A7523462E8E6CDADEE5D41DF2E71F6",
  "kubernetes.io": {
    "namespace": "default",
    "pod": {
      "name": "eks-iam-test2",
      "uid": "10dcccc8-a16c-4fc7-9663-13c9448e107a"
    },
    "serviceaccount": {
      "name": "default",
      "uid": "acb6c60d-0c5f-4583-b83b-1b629b0bdd87"
    },
    "warnafter": 1685087455
  },
  "nbf": 1685083848,
  "sub": "system:serviceaccount:default:default"
}

# Delete pod 2
kubectl delete pod eks-iam-test2

 

 

The pod created above is likewise in a state with no permissions.

 

Decoding the token above reveals details such as the service account and pod name; the iss claim (OIDC provider info) is also visible.
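
If the jwt CLI is not available, the payload can be decoded with plain base64 and jq. A minimal sketch; base64url padding is glossed over here, so some tokens may need '=' characters appended before decoding:

# A sketch: decode the JWT payload segment without the jwt CLI
echo "$SA_TOKEN" | cut -d '.' -f2 | base64 -d 2>/dev/null | jq .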

 

On to the final lab.

# Create an iamserviceaccount - AWS IAM role bound to a Kubernetes service account
eksctl create **iamserviceaccount** \
  --name **my-sa** \
  --namespace **default** \
  --cluster $CLUSTER_NAME \
  --approve \
  --attach-policy-arn $(aws iam list-policies --query 'Policies[?PolicyName==`AmazonS3ReadOnlyAccess`].Arn' --output text)

# Verify >> in the web console, check the CloudFormation Stack >> IAM Role
# Think about what the aws-load-balancer-controller IRSA would do!
eksctl get iamserviceaccount --cluster $CLUSTER_NAME

# Inspecting the newly created Kubernetes Service Account, we can see the role we want it to assume in our pod.
**kubectl get sa**
**kubectl describe sa my-sa**
Name:                my-sa
Namespace:           default
Labels:              app.kubernetes.io/managed-by=eksctl
Annotations:         **eks.amazonaws.com/role-arn: arn:aws:iam::911283464785:role/eksctl-myeks-addon-iamserviceaccount-default-Role1-1MJUYW59O6QGH**
Image pull secrets:  <none>
Mountable secrets:   <none>
Tokens:              <none>
Events:              <none>

Via IRSA we create a service account tied to an IAM role that has the AmazonS3ReadOnlyAccess policy attached. You can peek at that role's trust policy, as sketched below.
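
A sketch for inspecting the trust policy; the role name here is taken from the describe output above and will differ per cluster. It should show a Federated principal pointing at the EKS OIDC provider, with a condition on the service account's sub claim:

# A sketch: inspect the IRSA role's trust (assume-role) policy
aws iam get-role \
  --role-name eksctl-myeks-addon-iamserviceaccount-default-Role1-1MJUYW59O6QGH \
  --query 'Role.AssumeRolePolicyDocument'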

Then we create a pod that uses that service account.

# Create pod 3
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: eks-iam-test3
spec:
  **serviceAccountName: my-sa**
  containers:
    - name: my-aws-cli
      image: amazon/aws-cli:latest
      command: ['sleep', '36000']
  restartPolicy: Never
EOF

# When a pod uses this SA, the mutating webhook adds Env vars and a Volume
kubectl get mutatingwebhookconfigurations pod-identity-webhook -o yaml | kubectl neat | yh

**# Content that was not in the pod-creation YAML has been added!**
# The **Pod Identity Webhook** uses a **mutating** webhook to inject the **Env entries** below plus **one volume**
kubectl get pod eks-iam-test3
**kubectl describe pod eks-iam-test3**
...
**Environment**:
      AWS_STS_REGIONAL_ENDPOINTS:   regional
      AWS_DEFAULT_REGION:           ap-northeast-2
      AWS_REGION:                   ap-northeast-2
      AWS_ROLE_ARN:                 arn:aws:iam::911283464785:role/eksctl-myeks-addon-iamserviceaccount-default-Role1-GE2DZKJYWCEN
      AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    Mounts:
      /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-69rh8 (ro)
...
**Volumes:**
  **aws-iam-token**:
    Type:                    **Projected** (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
  kube-api-access-sn467:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
...

# Verify aws cli usage from inside the pod
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
**kubectl exec -it eks-iam-test3 -- aws sts get-caller-identity --query Arn**
"arn:aws:sts::911283464785:assumed-role/eksctl-myeks-addon-iamserviceaccount-default-Role1-GE2DZKJYWCEN/botocore-session-1685179271"

# Some calls work and some do not. Why?
kubectl exec -it eks-iam-test3 -- **aws s3 ls**
kubectl exec -it eks-iam-test3 -- **aws ec2 describe-instances --region ap-northeast-2**
kubectl exec -it eks-iam-test3 -- **aws ec2 describe-vpcs --region ap-northeast-2**

Looking above, we can confirm the Env vars were injected.
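
Those env vars are exactly what the AWS SDK/CLI uses to exchange the projected token for temporary credentials. A sketch of that exchange done by hand inside pod 3 (assume-role-with-web-identity is an unsigned STS call, so it needs no prior credentials):

# A sketch: manually perform the web-identity exchange the SDK does automatically
kubectl exec -it eks-iam-test3 -- sh -c 'aws sts assume-role-with-web-identity \
  --role-arn "$AWS_ROLE_ARN" \
  --role-session-name manual-test \
  --web-identity-token "$(cat $AWS_WEB_IDENTITY_TOKEN_FILE)" \
  --query "Credentials.Expiration"'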

 

Finally, the S3 listing that kept failing earlier now succeeds.

 

Since the permission we granted is S3 read-only, other data such as EC2 describe results cannot be viewed.
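
This can be confirmed from the role itself; a sketch, where the role name is taken from the pod's AWS_ROLE_ARN above and will differ per cluster:

# A sketch: list the managed policies attached to the IRSA role; only AmazonS3ReadOnlyAccess is expected
aws iam list-attached-role-policies \
  --role-name eksctl-myeks-addon-iamserviceaccount-default-Role1-GE2DZKJYWCEN \
  --query 'AttachedPolicies[].PolicyArn'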

 

 

 
