Chapter 1. GitOps and Kubernetes

The Four GitOps Principles

  • Declarative
    • Define only "what you want"; how to get there is not specified.
    • Example: declare "use 3 containers," and the system converges to that state automatically.
  • Versioned & Immutable
    • A version control system such as Git serves as the single source of truth.
    • Every change is tracked in history and immutable.
  • Pulled Automatically
    • Changes in the Git repository must be applied to the system automatically.
    • There should be no manual deployment steps.
  • Continuously Reconciled
    • The current state is continuously compared with the declared state and automatically converged.
    • This is called closed-loop control.

 

Deploying k8s with kind

#Proceed with the previously deployed k8s cluster
(⎈|kind-myk8s:default) zosys@4:~$ helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
"geek-cookbook" has been added to your repositories
(⎈|kind-myk8s:default) zosys@4:~$ helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=NodePort,service.main.ports.http.nodePort=30001 --set env.TZ="Asia/Seoul" --namespace kube-system
NAME: kube-ops-view
LAST DEPLOYED: Mon Nov  3 15:21:48 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

 

 

Imperative API vs. Declarative API

*Imperative approach

(⎈|kind-myk8s:default) zosys@4:~$ k create namespace test-ns
namespace/test-ns created

#test-ns.yaml
(⎈|kind-myk8s:default) zosys@4:~$ k create namespace test-ns --dry-run=client -o yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns

(⎈|kind-myk8s:default) zosys@4:~$ k create -f test-ns.yaml
namespace/test-ns created

The imperative approach proceeds procedurally, applying a series of commands in order, either as direct commands or via configuration files (YAML).

 

*Declarative approach

Create resources from a file; after editing the file, run an update/sync command. Use kubectl apply for both new and modified files.

(⎈|kind-myk8s:default) zosys@4:~$ cat test-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: test-ns
  labels:
    namespace: test-ns

(⎈|kind-myk8s:default) zosys@4:~$ k apply -f test-ns.yaml

 

Hands-on: Building a GitOps Operator

#Clone the repository
(⎈|kind-myk8s:default) zosys@4:~/gitops$ git clone https://github.com/PacktPublishing/ArgoCD-in-Practice.git
Cloning into 'ArgoCD-in-Practice'...
remote: Enumerating objects: 1261, done.
remote: Counting objects: 100% (125/125), done.
remote: Compressing objects: 100% (32/32), done.
remote: Total 1261 (delta 99), reused 97 (delta 93), pack-reused 1136 (from 1)
Receiving objects: 100% (1261/1261), 22.43 MiB | 11.46 MiB/s, done.


#Install Go
(⎈|kind-myk8s:default) zosys@4:~$ sudo apt install golang-go -y
[sudo] password for zosys:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  golang-1.22-go golang-1.22-src golang-src libpkgconf3 pkg-config pkgconf pkgconf-bin
Suggested packages:
  bzr | brz mercurial subversion
The following NEW packages will be installed:
  golang-1.22-go golang-1.22-src golang-go golang-src libpkgconf3 pkg-config pkgconf pkgconf-bin
0 upgraded, 8 newly installed, 0 to remove and 33 not upgraded.
Need to get 45.8 MB of archives.
After this operation, 228 MB of additional disk space will be used.
-----------------------------중략--------------------------------

(⎈|kind-myk8s:default) zosys@4:~/gitops/ArgoCD-in-Practice/ch01$ tree basic-gitops-operator
basic-gitops-operator
├── go.mod
├── go.sum
└── main.go

1 directory, 3 files
(⎈|kind-myk8s:default) zosys@4:~/gitops/ArgoCD-in-Practice/ch01$ tree basic-gitops-operator-config/
basic-gitops-operator-config/
├── deployment.yaml
└── namespace.yaml

#Run the operator
(⎈|kind-myk8s:default) zosys@4:~/gitops/ArgoCD-in-Practice/ch01/basic-gitops-operator$ go run main.go
go: downloading github.com/go-git/go-git/v5 v5.4.2
go: downloading github.com/go-git/go-billy/v5 v5.3.1
go: downloading github.com/ProtonMail/go-crypto v0.0.0-20210428141323-04723f9f07d7
go: downloading github.com/sergi/go-diff v1.1.0
go: downloading github.com/imdario/mergo v0.3.12
go: downloading github.com/mitchellh/go-homedir v1.1.0
go: downloading github.com/emirpasic/gods v1.12.0
go: downloading github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99
go: downloading github.com/go-git/gcfg v1.5.0
go: downloading golang.org/x/sys v0.0.0-20210616094352-59db8d763f22
go: downloading github.com/kevinburke/ssh_config v0.0.0-20201106050909-4977a11b4351
go: downloading github.com/xanzy/ssh-agent v0.3.0
go: downloading golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b
go: downloading golang.org/x/net v0.0.0-20210520170846-37e1c6afe023
go: downloading gopkg.in/warnings.v0 v0.1.2
------------중략--------------

#Verify creation
(⎈|kind-myk8s:default) zosys@4:~$ k get deploy,pod -n nginx
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           43s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-5869d7778c-rqcxl   1/1     Running   0          43s

#Force-delete the deployment

(⎈|kind-myk8s:default) zosys@4:~$ k delete deploy -n nginx nginx
deployment.apps "nginx" deleted from nginx namespace

#Automatically re-created (operator output)
start repo sync
start manifests apply
deployment.apps/nginx created
namespace/nginx unchanged

#Verify
(⎈|kind-myk8s:default) zosys@4:~$ k get deploy,pod -n nginx
NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx   1/1     1            1           16s

NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-5869d7778c-548j8   1/1     Running   0          16s

 

Chapter 2. Getting Started with ArgoCD

What is ArgoCD?

Argo CD is a declarative GitOps continuous delivery (CD) tool for Kubernetes.

 

Core Concepts and Terminology

  • Argo CD runs a reconciliation loop that compares the desired state declared in a Git repository (Helm charts, etc.) against the live state of the Kubernetes cluster and synchronizes the two.
  • In this process it uses kubectl apply rather than helm install, preserving the declarative deployment principle of GitOps.

Glossary

  • Application
    • A group of Kubernetes resources defined by manifests. In Argo CD it is defined by a CRD.
  • Application source type
    • The tool used to build the application, such as Helm, Kustomize, or Jsonnet.
  • Target state
    • The desired state of the application, represented by the source of truth: the Git repository.
  • Live state
    • The current state of the application as deployed to the Kubernetes cluster.
  • Sync status
    • Whether the live state matches the target state.
    • That is, whether the application deployed to Kubernetes matches the desired state described in the Git repository.
  • Sync
    • Applying changes to the Kubernetes cluster to move the application to the target state.
  • Sync operation status
    • Whether the sync operation succeeded or failed.
  • Refresh
    • Comparing the latest code in the Git repository against the live state to compute the differences.
  • Health status
    • Whether the application is running and able to serve requests.

 

Installing ArgoCD with Helm

(⎈|kind-myk8s:default) zosys@4:~/ArgoCD/ArgoCD-in-Practice$ k create ns argocd
namespace/argocd created
(⎈|kind-myk8s:default) zosys@4:~/ArgoCD/ArgoCD-in-Practice$ cat <<EOF > argocd-values.yaml
server:
  service:
    type: NodePort
    nodePortHttps: 30002
  extraArgs:
    - --insecure  # use HTTP instead of HTTPS
EOF

(⎈|kind-myk8s:default) zosys@4:~/ArgoCD/ArgoCD-in-Practice$ helm repo add argo https://argoproj.github.io/argo-helm
"argo" has been added to your repositories
(⎈|kind-myk8s:default) zosys@4:~/ArgoCD/ArgoCD-in-Practice$ helm install argocd argo/argo-cd --version 9.0.5 -f argocd-values.yaml --namespace argocd

#Check the components
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl get all -n argocd
NAME                                                   READY   STATUS      RESTARTS   AGE
pod/argocd-application-controller-0                    1/1     Running     0          43s
pod/argocd-applicationset-controller-bbff79c6f-2v5vb   1/1     Running     0          43s
pod/argocd-dex-server-6877ddf4f8-rmkwk                 1/1     Running     0          43s
pod/argocd-notifications-controller-7b5658fc47-2hbpf   1/1     Running     0          43s
-----------------중략-------------------------------------

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl get pod,svc,ep,secret,cm -n argocd
NAME                                                   READY   STATUS      RESTARTS   AGE
pod/argocd-application-controller-0                    1/1     Running     0          50s
pod/argocd-applicationset-controller-bbff79c6f-2v5vb   1/1     Running     0          50s
pod/argocd-dex-server-6877ddf4f8-rmkwk                 1/1     Running     0          50s
pod/argocd-notifications-controller-7b5658fc47-2hbpf   1/1     Running     0          50s
pod/argocd-redis-7d948674-jrvxz                        1/1     Running     0          50s
pod/argocd-redis-secret-init-9k4cd                     0/1     Completed   0          81s
pod/argocd-repo-server-7679dc55f5-gq6k7                1/1     Running     0          50s
pod/argocd-server-787fb5f956-xvcvr                     1/1     Running     0          50s

#Check the CRDs
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ k get crd | grep argo
applications.argoproj.io      2025-11-03T11:44:33Z
applicationsets.argoproj.io   2025-11-03T11:44:33Z
appprojects.argoproj.io       2025-11-03T11:44:33Z
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl get appproject -n argocd -o yaml
apiVersion: v1
items:
- apiVersion: argoproj.io/v1alpha1
  kind: AppProject
  metadata:
    creationTimestamp: "2025-11-03T11:44:35Z"
    generation: 1
    name: default
    namespace: argocd
    resourceVersion: "846"
    uid: d3b6435f-f139-476d-a25c-911852de4db8


#Check the service accounts
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ k get sa -n argocd
NAME                               SECRETS   AGE
argocd-application-controller      0         4m8s
argocd-applicationset-controller   0         4m8s
argocd-dex-server                  0         4m8s
argocd-notifications-controller    0         4m8s
argocd-redis-secret-init           0         4m39s
argocd-repo-server                 0         4m8s
argocd-server                      0         4m8s
default                            0         5m2s

#Check with rolesum (rolesum can be installed via krew)
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ k rolesum -n argocd argocd-server
ServiceAccount: argocd/argocd-server
Secrets:

Policies:
• [RB] argocd/argocd-server ⟶  [R] argocd/argocd-server
  Resource                     Name  Exclude  Verbs  G L W C U P D DC
  applications.argoproj.io     [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✖
  applicationsets.argoproj.io  [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✖
  appprojects.argoproj.io      [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✖
  configmaps                   [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✖
  events                       [*]     [-]     [-]   ✖ ✔ ✖ ✔ ✖ ✖ ✖ ✖
  secrets                      [*]     [-]     [-]   ✔ ✔ ✔ ✔ ✔ ✔ ✔ ✖


• [CRB] */argocd-server ⟶  [CR] */argocd-server
  Resource                     Name  Exclude  Verbs  G L W C U P D DC
  *.*                          [*]     [-]     [-]   ✔ ✖ ✖ ✖ ✖ ✔ ✔ ✖
  applications.argoproj.io     [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✔ ✖ ✖ ✖
  applicationsets.argoproj.io  [*]     [-]     [-]   ✔ ✔ ✔ ✖ ✔ ✖ ✖ ✖
  events                       [*]     [-]     [-]   ✖ ✔ ✖ ✔ ✖ ✖ ✖ ✖
  jobs.batch                   [*]     [-]     [-]   ✖ ✖ ✖ ✔ ✖ ✖ ✖ ✖
  pods                         [*]     [-]     [-]   ✔ ✖ ✖ ✖ ✖ ✖ ✖ ✖
  pods/log                     [*]     [-]     [-]   ✔ ✖ ✖ ✖ ✖ ✖ ✖ ✖
  workflows.argoproj.io        [*]     [-]     [-]   ✖ ✖ ✖ ✔ ✖ ✖ ✖ ✖

#Retrieve the initial admin password
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ;echo
9-R6i9SHluIOfzEz
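The jsonpath/base64 pattern above works because Secret data is stored base64-encoded; decoding the value from this run by hand shows the same result:

```shell
# The encoded string below is the .data.password field retrieved above;
# base64 -d recovers the plaintext password.
echo "OS1SNmk5U0hsdUlPZnpFeg==" | base64 -d; echo   # -> 9-R6i9SHluIOfzEz
```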

 

Running an Application

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values.yaml
    path: helm-guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
  syncPolicy:
    automated:
      enabled: true
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: guestbook
    server: https://kubernetes.default.svc
EOF
Warning: metadata.finalizers: "resources-finalizer.argocd.argoproj.io": prefer a domain-qualified finalizer name including a path (/) to avoid accidental conflicts with other finalizer writers
application.argoproj.io/guestbook created


(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl get pod,svc,ep -n guestbook
NAME                                            READY   STATUS    RESTARTS   AGE
pod/guestbook-helm-guestbook-6585c766d6-cd6zl   1/1     Running   0          47s

NAME                               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/guestbook-helm-guestbook   ClusterIP   10.96.17.19   <none>        80/TCP    47s

NAME                                 ENDPOINTS        AGE
endpoints/guestbook-helm-guestbook   10.244.0.14:80   47s

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl patch svc -n guestbook guestbook-helm-guestbook -p '{"spec":{"type":"NodePort","ports":[{"port":80,"targetPort":80,"nodePort":30003}]}}'
service/guestbook-helm-guestbook patched
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl get svc -n guestbook
NAME                       TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
guestbook-helm-guestbook   NodePort   10.96.17.19   <none>        80:30003/TCP   3m20s

#Moments later the Service is back to ClusterIP: selfHeal reverted the manual patch
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl get svc -n guestbook
NAME                       TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
guestbook-helm-guestbook   ClusterIP   10.96.17.19   <none>        80/TCP    3m31s

#After verifying, delete
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl delete applications -n argocd guestbook
application.argoproj.io "guestbook" deleted from argocd namespace

 

 

Installing the Argo CD CLI

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ VERSION=$(curl -L -s https://raw.githubusercontent.com/argoproj/argo-cd/stable/VERSION)
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/download/v$VERSION/argocd-linux-amd64
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ rm argocd-linux-amd64

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd version --client
argocd: v3.1.9+8665140
  BuildDate: 2025-10-17T22:07:41Z
  GitCommit: 8665140f96f6b238a20e578dba7f9aef91ddac51
  GitTreeState: clean
  GoVersion: go1.24.6
  Compiler: gc
  Platform: linux/amd64

 

Inspecting Information with the ArgoCD CLI

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd login 127.0.0.1:30002 --plaintext
Username: admin
Password:
'admin:login' logged in successfully
Context '127.0.0.1:30002' updated

#Check information after logging in
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd account list
NAME   ENABLED  CAPABILITIES
admin  true     login
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd proj list
NAME     DESCRIPTION  DESTINATIONS  SOURCES  CLUSTER-RESOURCE-WHITELIST  NAMESPACE-RESOURCE-BLACKLIST  SIGNATURE-KEYS  ORPHANED-RESOURCES  DESTINATION-SERVICE-ACCOUNTS
default               *,*           *        */*                         <none>                        <none>          disabled            <none>
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd repo list
TYPE  NAME  REPO  INSECURE  OCI  LFS  CREDS  STATUS  MESSAGE  PROJECT

 

Deploying an Application with the Argo CD CLI

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd app create guestbook --repo https://github.com/argoproj/argocd-example-apps.git --path helm-guestbook \
  --dest-server https://kubernetes.default.svc --dest-namespace guestbook --values values.yaml
application 'guestbook' created

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd app list
NAME              CLUSTER                         NAMESPACE  PROJECT  STATUS     HEALTH   SYNCPOLICY  CONDITIONS  REPO                                                 PATH            TARGET
argocd/guestbook  https://kubernetes.default.svc  guestbook  default  OutOfSync  Missing  Manual      <none>      https://github.com/argoproj/argocd-example-apps.git  helm-guestbook
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd app get argocd/guestbook
Name:               argocd/guestbook
Project:            default
Server:             https://kubernetes.default.svc
Namespace:          guestbook
URL:                https://argocd.example.com/applications/guestbook
Source:
- Repo:             https://github.com/argoproj/argocd-example-apps.git
  Target:
  Path:             helm-guestbook
  Helm Values:      values.yaml
SyncWindow:         Sync Allowed
Sync Policy:        Manual
Sync Status:        OutOfSync from  (0d521c6)
Health Status:      Missing

GROUP  KIND        NAMESPACE  NAME                      STATUS     HEALTH   HOOK  MESSAGE
       Service     guestbook  guestbook-helm-guestbook  OutOfSync  Missing
apps   Deployment  guestbook  guestbook-helm-guestbook  OutOfSync  Missing

#Trigger a sync manually
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd app sync argocd/guestbook
TIMESTAMP                  GROUP        KIND   NAMESPACE                  NAME        STATUS    HEALTH        HOOK  MESSAGE
2025-11-03T21:09:56+09:00            Service   guestbook  guestbook-helm-guestbook  OutOfSync  Missing
2025-11-03T21:09:56+09:00   apps  Deployment   guestbook  guestbook-helm-guestbook  OutOfSync  Missing
2025-11-03T21:09:56+09:00   apps  Deployment   guestbook  guestbook-helm-guestbook  OutOfSync  Missing              deployment.apps/guestbook-helm-guestbook created
2025-11-03T21:09:56+09:00            Service   guestbook  guestbook-helm-guestbook  OutOfSync  Missing              service/guestbook-helm-guestbook created
2025-11-03T21:09:56+09:00            Service   guestbook  guestbook-helm-guestbook    Synced  Healthy                  service/guestbook-helm-guestbook created
2025-11-03T21:09:56+09:00   apps  Deployment   guestbook  guestbook-helm-guestbook    Synced  Progressing              deployment.apps/guestbook-helm-guestbook created
------------------------중략-----------------------

#Delete
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd app delete argocd/guestbook
Are you sure you want to delete 'argocd/guestbook' and all its resources? [y/n] y
application 'argocd/guestbook' deleted

 

 

 

Installing the Argocd-autopilot CLI

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ VERSION=$(curl --silent "https://api.github.com/repos/argoproj-labs/argocd-autopilot/releases/latest" | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/')
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ curl -L --output - https://github.com/argoproj-labs/argocd-autopilot/releases/download/"$VERSION"/argocd-autopilot-linux-amd64.tar.gz | tar zx
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100 71.5M  100 71.5M    0     0  7140k      0  0:00:10  0:00:10 --:--:-- 8974k

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ sudo mv ./argocd-autopilot-* /usr/local/bin/argocd-autopilot
[sudo] password for zosys:
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd-autopilot version
v0.4.20
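The VERSION pipeline above can be exercised locally by feeding it a sample API line (the JSON fragment is made up to match the release above):

```shell
# grep picks the "tag_name" line out of the GitHub API response;
# sed strips everything but the last quoted string, leaving the tag.
line='  "tag_name": "v0.4.20",'
printf '%s\n' "$line" | grep '"tag_name"' | sed -E 's/.*"([^"]+)".*/\1/'   # -> v0.4.20
```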

 

Getting Started: Bootstrap Argo CD

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ export GIT_TOKEN=ghp_.....
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ export GIT_REPO=https://github.com/zeroone5727/autopilot.git

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd-autopilot repo bootstrap
INFO cloning repo: https://github.com/zeroone5727/autopilot.git
INFO empty repository, initializing a new one with specified remote
WARNING --provider not specified, assuming provider from url: github
WARNING --provider not specified, assuming provider from url: github
INFO using revision: "", installation path: ""
INFO using context: "kind-myk8s", namespace: "argocd"
INFO applying bootstrap manifests to cluster...
namespace/argocd created
Warning: resource customresourcedefinitions/applications.argoproj.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io configured
Warning: resource customresourcedefinitions/applicationsets.argoproj.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io configured
Warning: resource customresourcedefinitions/appprojects.argoproj.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io configured
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
------------------중략---------------------

#The repo URL points to your own Git repository
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl get applications.argoproj.io -n argocd autopilot-bootstrap -o yaml | grep repoURL
      {"apiVersion":"argoproj.io/v1alpha1","kind":"Application","metadata":{"annotations":{},"creationTimestamp":null,"finalizers":["resources-finalizer.argocd.argoproj.io"],"labels":{"app.kubernetes.io/managed-by":"argocd-autopilot","app.kubernetes.io/name":"autopilot-bootstrap"},"name":"autopilot-bootstrap","namespace":"argocd"},"spec":{"destination":{"namespace":"argocd","server":"https://kubernetes.default.svc"},"ignoreDifferences":[{"group":"argoproj.io","jsonPointers":["/status"],"kind":"Application"}],"project":"default","source":{"path":"bootstrap","repoURL":"https://github.com/zeroone5727/autopilot.git"},"syncPolicy":{"automated":{"allowEmpty":true,"prune":true,"selfHeal":true},"syncOptions":["allowEmpty=true"]}},"status":{"health":{},"sourceHydrator":{},"summary":{},"sync":{"comparedTo":{"destination":{},"source":{"repoURL":""}},"status":""}}}
    repoURL: https://github.com/zeroone5727/autopilot.git
      repoURL: https://github.com/zeroone5727/autopilot.git
        repoURL: https://github.com/zeroone5727/autopilot.git
        repoURL: https://github.com/zeroone5727/autopilot.git
        
#After setting up port-forwarding, access argocd as below
(⎈|kind-myk8s:N/A) zosys@4:~$ kubectl port-forward -n argocd svc/argocd-server 8080:80

 

 

Creating Projects and Adding Applications

Creating Projects

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd-autopilot project create dev
INFO cloning git repository: https://github.com/zeroone5727/autopilot.git
Enumerating objects: 17, done.
Counting objects: 100% (17/17), done.
Compressing objects: 100% (13/13), done.
Total 17 (delta 1), reused 17 (delta 1), pack-reused 0 (from 0)
WARNING --provider not specified, assuming provider from url: github
INFO using revision: "", installation path: "/"
INFO pushing new project manifest to repo
WARNING --provider not specified, assuming provider from url: github
INFO project created: 'dev'

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd-autopilot project create prd
INFO cloning git repository: https://github.com/zeroone5727/autopilot.git
Enumerating objects: 18, done.
Counting objects: 100% (18/18), done.
Compressing objects: 100% (15/15), done.
Total 18 (delta 2), reused 17 (delta 1), pack-reused 0 (from 0)
WARNING --provider not specified, assuming provider from url: github
INFO using revision: "", installation path: "/"
INFO pushing new project manifest to repo
WARNING --provider not specified, assuming provider from url: github
INFO project created: 'prd'

 

Adding Applications

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$  argocd-autopilot app create hello-world1 --app github.com/argoproj-labs/argocd-autopilot/examples/demo-app/ -p dev --type kustomize
INFO cloning git repository: https://github.com/zeroone5727/autopilot.git
Enumerating objects: 19, done.
Counting objects: 100% (19/19), done.
Compressing objects: 100% (16/16), done.
Total 19 (delta 3), reused 17 (delta 1), pack-reused 0 (from 0)
WARNING --provider not specified, assuming provider from url: github
INFO using revision: "", installation path: "/"
INFO committing changes to gitops repo...
WARNING --provider not specified, assuming provider from url: github
INFO installed application: hello-world1

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ argocd-autopilot app create hello-world2 --app github.com/argoproj-labs/argocd-autopilot/examples/demo-app/ -p prd --type kustomize
INFO cloning git repository: https://github.com/zeroone5727/autopilot.git
Enumerating objects: 26, done.
Counting objects: 100% (26/26), done.
Compressing objects: 100% (22/22), done.
Total 26 (delta 4), reused 23 (delta 1), pack-reused 0 (from 0)
WARNING --provider not specified, assuming provider from url: github
INFO using revision: "", installation path: "/"
INFO committing changes to gitops repo...
WARNING --provider not specified, assuming provider from url: github
INFO installed application: hello-world2

 

ArgoCD Synchronization

  • Sync is the process of applying changes to the Kubernetes cluster to bring the application to its target state.
  • The process is divided into phases via resource hooks, which let you run custom work at specific points.
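As a hedged sketch of such a hook (the Job name, image, and command are hypothetical; the annotations are Argo CD's hook annotations):

```yaml
# Hypothetical PreSync hook: run a one-off Job before the main sync wave.
apiVersion: batch/v1
kind: Job
metadata:
  name: pre-sync-task           # hypothetical name
  annotations:
    argocd.argoproj.io/hook: PreSync                     # run before the sync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded # clean up on success
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox:1.36
        command: ["sh", "-c", "echo running pre-sync task"]
```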

 

Checking Sync Windows

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl apply -f app-project.yaml -n argocd
Warning: resource appprojects/default is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
appproject.argoproj.io/default configured

 

Verify in the UI

 

 

Chapter 3. Operating ArgoCD

Deploying k8s with kind (3 worker nodes for a high-availability setup)

(⎈|N/A:N/A) zosys@4:~/ArgoCD$ kind create cluster --name myk8s --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
  - containerPort: 30002
    hostPort: 30002
  - containerPort: 30003
    hostPort: 30003
- role: worker
- role: worker
- role: worker
EOF
Creating cluster "myk8s" ...


(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
"geek-cookbook" already exists with the same configuration, skipping
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=NodePort,service.main.ports.http.nodePort=30001 --set env.TZ="Asia/Seoul" --namespace kube-system
NAME: kube-ops-view
LAST DEPLOYED: Mon Nov  3 22:10:40 2025
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

 

Installing in HA Mode

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ mkdir -p resources
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ cat << EOF > resources/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: argocd
EOF
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl apply -f resources/namespace.yaml
namespace/argocd created
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ wget https://raw.githubusercontent.com/argoproj/argo-cd/refs/heads/master/manifests/ha/install.yaml
--2025-11-03 22:11:32--  https://raw.githubusercontent.com/argoproj/argo-cd/refs/heads/master/manifests/ha/install.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.108.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1516613 (1.4M) [text/plain]
Saving to: ‘install.yaml’

install.yaml                  100%[=================================================>]   1.45M  7.34MB/s    in 0.2s

2025-11-03 22:11:32 (7.34 MB/s) - ‘install.yaml’ saved [1516613/1516613]

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ mv install.yaml resources/
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ kubectl apply -f resources/install.yaml -n argocd
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD$ k get pod -n argocd
NAME                                                READY   STATUS     RESTARTS   AGE
argocd-application-controller-0                     1/1     Running    0          93s
argocd-applicationset-controller-694b4774cd-2tdnx   1/1     Running    0          93s
argocd-dex-server-66585dc685-vd28n                  1/1     Running    0          93s
argocd-notifications-controller-7c584f65cc-pfpj6    1/1     Running    0          93s
argocd-redis-ha-haproxy-7487b954d9-85v7m            1/1     Running    0          93s
argocd-redis-ha-haproxy-7487b954d9-f46m5            1/1     Running    0          93s
argocd-redis-ha-haproxy-7487b954d9-m9qpn            1/1     Running    0          93s
argocd-redis-ha-server-0                            0/3     Init:0/1   0          93s
argocd-repo-server-74b54f7cb-mnqmf                  1/1     Running    0          93s
argocd-repo-server-74b54f7cb-mtnqf                  1/1     Running    0          93s
argocd-server-8b767f58c-jjskk                       0/1     Running    0          93s
argocd-server-8b767f58c-n48bb                       1/1     Running    0          93s

 

ArgoCD Self-Management

cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: argocd
  namespace: argocd
spec:
  project: default
  source:
    path: resources
    repoURL: https://github.com/zeroone5727/my-sample-app
    targetRevision: main
  syncPolicy:
    automated: {}
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
EOF
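Note that `automated: {}` above enables auto-sync with defaults only, so prune and selfHeal remain off. For comparison, the explicit form uses the same fields as the guestbook Application earlier:

```yaml
  syncPolicy:
    automated:
      prune: true     # also delete resources that were removed from Git
      selfHeal: true  # revert manual changes made directly in the cluster
```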

 

Changing ArgoCD Configuration

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources$ kubectl get networkpolicies.networking.k8s.io -n argocd
NAME                                              POD-SELECTOR                                              AGE
argocd-application-controller-network-policy      app.kubernetes.io/name=argocd-application-controller      38m
argocd-applicationset-controller-network-policy   app.kubernetes.io/name=argocd-applicationset-controller   38m
argocd-dex-server-network-policy                  app.kubernetes.io/name=argocd-dex-server                  38m
argocd-notifications-controller-network-policy    app.kubernetes.io/name=argocd-notifications-controller    38m
argocd-redis-ha-proxy-network-policy              app.kubernetes.io/name=argocd-redis-ha-haproxy            38m
argocd-redis-ha-server-network-policy             app.kubernetes.io/name=argocd-redis-ha                    38m
argocd-repo-server-network-policy                 app.kubernetes.io/name=argocd-repo-server                 38m
argocd-server-network-policy                      app.kubernetes.io/name=argocd-server                      38m

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ vi install.yaml

#Delete the network policies, then push to the remote repository
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ git add . && git commit -m "Delete Network Policy Resource" && git push -u origin main
[main e877c84] Delete Network Policy Resource
 1 file changed, 227 deletions(-)
Username for 'https://github.com': zeroone5727
Password for 'https://zeroone5727@github.com':
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (4/4), 412 bytes | 412.00 KiB/s, done.
Total 4 (delta 1), reused 0 (delta 0), pack-reused 0
remote: Resolving deltas: 100% (1/1), completed with 1 local object.
To https://github.com/zeroone5727/my-sample-app.git
   a061807..e877c84  main -> main
branch 'main' set up to track 'origin/main'.

#You can watch the resources being deleted
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ watch -d kubectl get networkpolicies.networking.k8s.io -n argocd
Every 2.0s: kubectl get networkpolicies.networking.k8s.io -n argocd                          4: Mon Nov  3 22:53:55 2025

No resources found in argocd namespace.

 

 

Hands-on: Observability

Installing kube-prometheus-stack

#Add the Helm repo
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories

#Create the values file
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ cat <<EOT > monitor-values.yaml
prometheus:
  prometheusSpec:
    scrapeInterval: "15s"
    evaluationInterval: "15s"
  service:
    type: NodePort
    nodePort: 30002

grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: prom-operator
  service:
    type: NodePort
    nodePort: 30003

alertmanager:
  enabled: false
defaultRules:
  create: false
prometheus-windows-exporter:
  prometheus:
    monitor:
      enabled: false
EOT

#배포
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 75.15.1 \
-f monitor-values.yaml --create-namespace --namespace monitoring
NAME: kube-prometheus-stack
LAST DEPLOYED: Mon Nov  3 23:00:04 2025
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:

접속 확인

 

#설치 확인 및 각종 정보 확인
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ helm list -n monitoring
NAME                    NAMESPACE       REVISION        UPDATED                                 STATUS          CHART                           APP VERSION
kube-prometheus-stack   monitoring      1               2025-11-03 23:00:04.046910353 +0900 KST deployed        kube-prometheus-stack-75.15.1   v0.83.0


(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl get prometheus,servicemonitors -n monitoring
NAME                                                                VERSION   DESIRED   READY   RECONCILED   AVAILABLE   AGE
prometheus.monitoring.coreos.com/kube-prometheus-stack-prometheus   v3.5.0    1         1       True         True        2m31s

NAME                                                                                  AGE
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-apiserver                  2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-coredns                    2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-grafana                    2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-kube-controller-manager    2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-kube-etcd                  2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-kube-proxy                 2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-kube-scheduler             2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-kube-state-metrics         2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-kubelet                    2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-operator                   2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-prometheus                 2m31s
servicemonitor.monitoring.coreos.com/kube-prometheus-stack-prometheus-node-exporter   2m31s

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl exec -it sts/prometheus-kube-prometheus-stack-prometheus -n monitoring -c prometheus -- prometheus --version
prometheus, version 3.5.0 (branch: HEAD, revision: 8be3a9560fbdd18a94dedec4b747c35178177202)
  build user:       root@4451b64cb451
  build date:       20250714-16:15:23
  go version:       go1.24.5
  platform:         linux/amd64
  tags:             netgo,builtinassets

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl get servicemonitors.monitoring.coreos.com -n monitoring kube-prometheus-stack-apiserver -o yaml | grep labels: -A10
  labels:
    app: kube-prometheus-stack-apiserver
    app.kubernetes.io/instance: kube-prometheus-stack
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/part-of: kube-prometheus-stack
    app.kubernetes.io/version: 75.15.1
    chart: kube-prometheus-stack-75.15.1
    heritage: Helm
    release: kube-prometheus-stack
  name: kube-prometheus-stack-apiserver
  namespace: monitoring

 

ArgoCD 구성요소에 대한 ServiceMonitor 생성하기

#테스트파드구성
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:alpine
    ports:
    - containerPort: 80
EOF
pod/nginx created

#메트릭 호출
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl exec -it -n default nginx -- curl argocd-metrics.argocd.svc:8082/metrics
# HELP argocd_app_info Information about application.
# TYPE argocd_app_info gauge
argocd_app_info{autosync_enabled="true",dest_namespace="argocd",dest_server="https://kubernetes.default.svc",health_status="Healthy",name="argocd",namespace="argocd",operation="",project="default",repo="https://github.com/zeroone5727/my-sample-app",sync_status="Synced"} 1
# HELP argocd_app_k8s_request_total Number of kubernetes requests executed during application reconciliation.
# TYPE argocd_app_k8s_request_total counter
argocd_app_k8s_request_total{dry_run="false",name="argocd",namespace="argocd",project="default",resource_kind="api",resource_namespace="",response_code="200",server="https://10.96.0.1:443",verb="Get"} 10
----------------------중략-------------------------


(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl get svc,ep -n argocd -l app.kubernetes.io/name=argocd-metrics
NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/argocd-metrics   ClusterIP   10.96.212.0   <none>        8082/TCP   64m

NAME                       ENDPOINTS         AGE
endpoints/argocd-metrics   10.244.2.5:8082   64m

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl get pod -n argocd -l app.kubernetes.io/name=argocd-application-controller
NAME                              READY   STATUS    RESTARTS   AGE
argocd-application-controller-0   1/1     Running   0          65m

#ServiceMonitor 생성 : Prometheus가 release: kube-prometheus-stack 라벨로 수집 대상 ServiceMonitor를 선택하므로 동일한 라벨을 지정한다
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-metrics
  namespace: monitoring
  labels:
    release: kube-prometheus-stack
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-metrics
  endpoints:
    - port: metrics
  namespaceSelector:
    matchNames:
      - argocd
EOF
servicemonitor.monitoring.coreos.com/argocd-metrics created

#argocd-server 메트릭호출
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl exec -it -n default nginx -- curl argocd-server-metrics.argocd.svc:8083/metrics | more
# HELP argocd_info ArgoCD version information
# TYPE argocd_info gauge
argocd_info{version="v3.3.0+4ea2768"} 1
# HELP argocd_kubectl_rate_limiter_duration_seconds Kubectl rate limiter latency
# TYPE argocd_kubectl_rate_limiter_duration_seconds histogram
argocd_kubectl_rate_limiter_duration_seconds_bucket{host="10.96.0.1:443",verb="Get",le="0.005"} 4
-------------------------------------중략--------------------------------

#전체 서비스모니터 생성완료
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl get servicemonitors -n monitoring | grep argocd
argocd-applicationset-controller-metrics         5s
argocd-dex-server                                5s
argocd-metrics                                   4m11s
argocd-notifications-controller                  5s
argocd-redis-haproxy-metrics                     5s
argocd-repo-server-metrics                       14s
argocd-server-metrics                            21s

 

 

그라파나 대시보드 추가 및 확인

cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    helm:
      valueFiles:
      - values.yaml
    path: helm-guestbook
    repoURL: https://github.com/argoproj/argocd-example-apps
    targetRevision: HEAD
  syncPolicy:
    automated:
      enabled: true
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
  destination:
    namespace: guestbook
    server: https://kubernetes.default.svc
EOF

 

 

백업 및 복원

백업(argocd cli활용)

#패스워드 확인
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d ;echo
#패스워드설정
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ ARGOPW=""

#CLI로그인
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ argocd login localhost:8080  --username admin --password $ARGOPW --insecure
'admin:login' logged in successfully
Context 'localhost:8080' updated

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ argocd cluster list
SERVER                          NAME        VERSION  STATUS      MESSAGE  PROJECT
https://kubernetes.default.svc  in-cluster  1.32     Successful
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ argocd app list
NAME              CLUSTER                         NAMESPACE  PROJECT  STATUS  HEALTH   SYNCPOLICY  CONDITIONS  REPO                                             PATH            TARGET
argocd/argocd     https://kubernetes.default.svc  argocd     default  Synced  Healthy  Auto-Prune  <none>      https://github.com/zeroone5727/my-sample-app     resources       main
argocd/guestbook  https://kubernetes.default.svc  guestbook  default  Synced  Healthy  Auto-Prune  <none>      https://github.com/argoproj/argocd-example-apps  helm-guestbook  HEAD

#백업생성 및 파일확인
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ argocd admin export -n argocd > backup.yaml
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ cat backup.yaml | more
apiVersion: v1
data:
  resource.customizations.ignoreResourceUpdates.ConfigMap: |
    jqPathExpressions:
      # Ignore the cluster-autoscaler status
      - '.metadata.annotations."cluster-autoscaler.kubernetes.io/last-updated"'
      # Ignore the annotation of the legacy Leases election
      - '.metadata.annotations."control-plane.alpha.kubernetes.io/leader"'
  resource.customizations.ignoreResourceUpdates.Endpoints: |
    jsonPointers:
      - /metadata
      - /subsets

 

 

추가 클러스터 생성 및 복원 작업

(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kubectl config get-contexts
CURRENT   NAME         CLUSTER      AUTHINFO     NAMESPACE
*         kind-myk8s   kind-myk8s   kind-myk8s
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kind create cluster --name myk8s2 --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 31000
    hostPort: 31000
  - containerPort: 31001
    hostPort: 31001
  - containerPort: 31002
    hostPort: 31002
  - containerPort: 31003
    hostPort: 31003
- role: worker
- role: worker
- role: worker
EOF
Creating cluster "myk8s2" ...

#WSL2 환경에서는 myk8s2 생성이 아래와 같이 실패하여, docker-desktop(Windows) 환경에서 클러스터를 구성
Creating cluster "myk8s2" ...
 ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
 ✗ Preparing nodes 📦 📦 📦 📦
Deleted nodes: ["myk8s2-worker3" "myk8s2-worker" "myk8s2-control-plane" "myk8s2-worker2"]
ERROR: failed to create cluster: could not find a log line that matches "Reached target .*Multi-User System.*|detected cgroup v1"
PS C:\Users\zosys> kubectl apply -f resources/namespace.yaml
namespace/argocd created
PS C:\Users\zosys> kubectl apply -f resources/install.yaml -n argocd
customresourcedefinition.apiextensions.k8s.io/applications.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/applicationsets.argoproj.io created
customresourcedefinition.apiextensions.k8s.io/appprojects.argoproj.io created
serviceaccount/argocd-application-controller created
serviceaccount/argocd-applicationset-controller created
--------------------------중략------------------------------

 

 

#파워쉘내에 argocd 설치 진행 
PS C:\Users\zosys> $url = "https://github.com/argoproj/argo-cd/releases/download/" + $version + "/argocd-windows-amd64.exe"
PS C:\Users\zosys> $output = "argocd.exe"
PS C:\Users\zosys> Invoke-WebRequest -Uri $url -OutFile $output

PS C:\Users\zosys> .\argocd.exe login localhost:8081  --username admin --password "" --insecure        
'admin:login' logged in successfully
Context 'localhost:8081' updated

#현황보기
PS C:\Users\zosys> .\argocd.exe app list                                                                   
NAME  CLUSTER  NAMESPACE  PROJECT  STATUS  HEALTH  SYNCPOLICY  CONDITIONS  REPO  PATH  TARGET
PS C:\Users\zosys> .\argocd.exe cluster list                                                               
SERVER                          NAME        VERSION  STATUS   MESSAGE                                                  PROJECT
https://kubernetes.default.svc  in-cluster           Unknown  Cluster has no applications and is not being monitored.

#백업파일로 복원 진행
PS C:\Users\zosys\resources> Get-Content -Raw .\backup.yaml | ..\argocd.exe admin import -n argocd -
import process started argocd
/ConfigMap argocd-cm in namespace argocd updated
/ConfigMap argocd-rbac-cm in namespace argocd updated
/ConfigMap argocd-ssh-known-hosts-cm in namespace argocd updated
/ConfigMap argocd-tls-certs-cm in namespace argocd updated
/Secret argocd-secret in namespace argocd updated
argoproj.io/Application argocd in namespace argocd created
{"level":"info","msg":"Warning: metadata.finalizers: \"resources-finalizer.argocd.argoproj.io\": prefer a domain-qualified finalizer name to avoid accidental conflicts with other finalizer writers","time":"2025-11-04T00:10:51+09:00"}
argoproj.io/Application guestbook in namespace argocd created
Import process comple

#복원 확인
PS C:\Users\zosys> .\argocd.exe app list                                                                   
NAME              CLUSTER                         NAMESPACE  PROJECT  STATUS     HEALTH       SYNCPOLICY  CONDITIONS  REPO                                             PATH            TARGET
argocd/argocd     https://kubernetes.default.svc  argocd     default  OutOfSync  Degraded     Auto-Prune  <none>      https://github.com/zeroone5727/my-sample-app     resources       main
argocd/guestbook  https://kubernetes.default.svc  guestbook  default  Synced     Progressing  Auto-Prune  <none>      https://github.com/argoproj/argocd-example-apps  helm-guestbook  HEAD
PS C:\Users\zosys> .\argocd.exe cluster list                                                               
SERVER                          NAME        VERSION  STATUS      MESSAGE  PROJECT
https://kubernetes.default.svc  in-cluster  1.31     Successful
#리소스 삭제 
(⎈|kind-myk8s:N/A) zosys@4:~/ArgoCD/resources/resources$ kind delete cluster --name myk8s
Deleting cluster "myk8s" ...

가시다님과 운영진분들께서 운영하시는 클라우드닷넷의 이번 주차 스터디 주제는 Jenkins + ArgoCD로, 스터디는 다음 구성으로 진행하였다.

 

5년전쯤 CI도구로 Jenkins를 사용했었는데 플러그인 관리에 상당히 애를 먹었던 기억이 있다. 그 이후로 GitLab CI를 주로 사용하고 있다. 따라서 이번 실습에서 나는 현재 사용중인 GitLab CI를 활용해서 실무 환경을 소개하며 스터디를 진행했고, 구성도는 다음과 같다.

 

 

 

GitLab 파이프라인의 핵심 구성 요소

Pipeline (파이프라인)

  • 전체 CI/CD 프로세스를 의미
  • 여러 개의 Stage로 구성

Stage (스테이지)

  • 파이프라인의 논리적 단계
  • 순차적으로 실행됨
  • 예: build → test → deploy

Job (잡)

  • 실제로 실행되는 작업 단위
  • 각 Job은 특정 Stage에 속함
  • 같은 Stage의 Job들은 병렬로 실행

Runner (러너)

  • Job을 실행하는 에이전트
  • Shared Runner 또는 Specific Runner 사용 가능
  • Docker, Shell, Kubernetes 등 다양한 Executor 지원
 

gitlab-ci.yml 파일 구조

프로젝트 루트에 .gitlab-ci.yml 파일을 생성하면 자동으로 CI/CD가 활성화된다.

기본 구조

# Stage 정의
stages:
  - build
  - test
  - deploy

# Job 정의
job_name:
  stage: stage_name
  script:
    - command1
    - command2

최소 구성 예제

stages:
  - build

build-job:
  stage: build
  script:
    - echo "Hello, GitLab CI!"

핵심 문법 요소

Stages (스테이지)

파이프라인의 실행 순서를 정의한다.

stages:
  - build
  - test
  - deploy
  • 정의한 순서대로 순차 실행
  • 한 Stage의 모든 Job이 성공해야 다음 Stage로 진행
  • 정의하지 않으면 기본값: build, test, deploy

Jobs (잡)

 
job_name:
  stage: test
  script:
    - npm install
    - npm test


Job 이름 규칙

  • 영문자, 숫자, 하이픈(-), 언더스코어(_) 사용 가능
  • .으로 시작하면 숨겨진 Job (템플릿용)

Script (스크립트)

실행할 명령어를 정의한다.

 
job_name:
  script:
    - echo "Starting build"
    - docker build -t myapp .
    - docker push myapp:latest


멀티라인 스크립트

job_name:
  script:
    - |
      if [ "$CI_COMMIT_BRANCH" == "main" ]; then
        echo "Deploying to production"
      else
        echo "Deploying to staging"
      fi

Image (이미지)

Docker 이미지를 지정한다.

# 전역 이미지
image: node:18

# Job별 이미지
build-job:
  image: node:18
  script:
    - npm install

test-job:
  image: python:3.9
  script:
    - pytest

Before Script / After Script

# 모든 Job 전후에 실행
before_script:
  - echo "Setting up environment"

after_script:
  - echo "Cleaning up"

# Job별 설정
test-job:
  before_script:
    - npm install
  script:
    - npm test
  after_script:
    - rm -rf node_modules

Variables (변수)

# 전역 변수
variables:
  DATABASE_URL: "postgres://localhost/db"
  DEPLOY_ENV: "staging"

# Job별 변수
deploy-job:
  variables:
    DEPLOY_ENV: "production"
  script:
    - echo "Deploying to $DEPLOY_ENV"

GitLab 제공 기본 변수

script:
  - echo "Branch: $CI_COMMIT_BRANCH"
  - echo "Commit SHA: $CI_COMMIT_SHA"
  - echo "Project Name: $CI_PROJECT_NAME"
  - echo "Pipeline ID: $CI_PIPELINE_ID"

Only / Except (실행 조건)

# 특정 브랜치에서만 실행
deploy-production:
  stage: deploy
  script:
    - ./deploy.sh
  only:
    - main

# 특정 브랜치 제외
test-job:
  stage: test
  script:
    - npm test
  except:
    - main

Rules (고급 조건)

only/except보다 강력하고 유연하게 실행 조건을 설정할 수 있다.

deploy-job:
  stage: deploy
  script:
    - ./deploy.sh
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: always
    - if: '$CI_COMMIT_BRANCH == "develop"'
      when: manual
    - when: never

Artifacts (아티팩트)

Job 간 파일을 공유하거나 다운로드 가능하게 한다.

build-job:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/
    expire_in: 1 week

deploy-job:
  stage: deploy
  script:
    - ls dist/  # build-job의 artifacts 사용 가능

Artifacts 옵션

artifacts:
  paths:
    - build/
    - dist/
  exclude:
    - "*.log"
  expire_in: 30 days
  when: on_success  # on_success, on_failure, always

Cache (캐시)

의존성 등을 캐싱하여 빌드 속도를 개선한다.

# 전역 캐시
cache:
  paths:
    - node_modules/
  key: $CI_COMMIT_REF_SLUG

# Job별 캐시
test-job:
  cache:
    key: npm-cache
    paths:
      - node_modules/
  script:
    - npm install
    - npm test

Cache vs Artifacts

  • Cache: 빌드 속도 향상이 목적이며, Job 간 공유가 보장되지 않음
  • Artifacts: Job 간 파일 전달, 다운로드 가능

Dependencies (의존성)

특정 Job의 artifacts만 사용한다.

build-app:
  stage: build
  script:
    - npm run build
  artifacts:
    paths:
      - dist/

build-docs:
  stage: build
  script:
    - npm run docs
  artifacts:
    paths:
      - docs/

deploy:
  stage: deploy
  dependencies:
    - build-app  # build-app의 artifacts만 다운로드
  script:
    - deploy dist/

Services (서비스)

데이터베이스 등의 보조 컨테이너를 실행한다.

test-job:
  image: node:18
  services:
    - postgres:14
    - redis:latest
  variables:
    POSTGRES_DB: test_db
    POSTGRES_USER: user
    POSTGRES_PASSWORD: password
  script:
    - npm test

When (실행 시점)

deploy-manual:
  stage: deploy
  script:
    - ./deploy.sh
  when: manual  # 수동 실행

cleanup-on-failure:
  stage: cleanup
  script:
    - ./cleanup.sh
  when: on_failure  # 실패 시에만 실행

when 옵션

  • on_success: 이전 Stage 성공 시 (기본값)
  • on_failure: 이전 Stage 실패 시
  • always: 항상 실행
  • manual: 수동 트리거
  • delayed: 지연 실행
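delayed는 start_in과 함께 사용해 지연 실행 시점을 지정한다. 아래는 가상의 Job 이름과 스크립트로 작성한 간단한 예시다.

```yaml
# 30분 뒤 자동 실행되는 Job 예시 (Job 이름과 스크립트는 예시 값)
delayed-rollback-check:
  stage: deploy
  script:
    - ./rollback-check.sh
  when: delayed
  start_in: 30 minutes
```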

 

stage, job을 활용한 기본 스크립트는 다음과 같다.

stages:
  - prepare
  - build
  - test
  - deploy

# ==================== PREPARE STAGE ====================
setup-environment:              # Job 이름: setup-environment
  stage: prepare                # 이 Job은 prepare Stage에 속함
  script:
    - echo "환경 설정 중..."

# ==================== BUILD STAGE ====================
build-frontend:                 # Job 이름: build-frontend
  stage: build                  # 이 Job은 build Stage에 속함
  script:
    - echo "프론트엔드 빌드 중..."

build-backend:                  # Job 이름: build-backend
  stage: build                  # 이 Job은 build Stage에 속함
  script:
    - echo "백엔드 빌드 중..."

build-docker-image:             # Job 이름: build-docker-image
  stage: build                  # 이 Job은 build Stage에 속함
  script:
    - echo "Docker 이미지 빌드 중..."

# ==================== TEST STAGE ====================
unit-test:                      # Job 이름: unit-test
  stage: test                   # 이 Job은 test Stage에 속함
  script:
    - echo "단위 테스트 실행 중..."

integration-test:               # Job 이름: integration-test
  stage: test                   # 이 Job은 test Stage에 속함
  script:
    - echo "통합 테스트 실행 중..."

security-scan:                  # Job 이름: security-scan
  stage: test                   # 이 Job은 test Stage에 속함
  script:
    - echo "보안 스캔 중..."

# ==================== DEPLOY STAGE ====================
deploy-staging:                 # Job 이름: deploy-staging
  stage: deploy                 # 이 Job은 deploy Stage에 속함
  script:
    - echo "스테이징 배포 중..."
  only:
    - develop

deploy-production:              # Job 이름: deploy-production
  stage: deploy                 # 이 Job은 deploy Stage에 속함
  script:
    - echo "프로덕션 배포 중..."
  when: manual
  only:
    - main
러너 태그(tags)를 지정하지 않아 Job이 stuck 상태가 되었다.

 

 

GitLab Runner 파드외에 추가로 파이프라인용 파드가 실행되는것을 볼 수 있다.

 

파이프라인의 해당 스테이지별로 파드가 실행됨

 

그렇다면 컨테이너 이미지 저장소 및 Manifest는 어떻게 지정할까 ?

 

위와 같이 파이프라인 스크립트 내에 변수로 지정할 수 있다. 다만 이렇게 설정할 경우 변수가 유출될 수 있다. 특히 프라이빗 컨테이너 저장소 자격증명 토큰은 절대 파이프라인에 직접 저장하면 안 된다.

이때는 GitLab의 그룹 변수, 또는 프로젝트 변수를 활용할 수 있다.
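예를 들어 GitLab의 Settings > CI/CD > Variables에 토큰을 Masked 변수로 등록해 두면, 파이프라인 파일에서는 변수 이름으로만 참조하게 된다. 아래는 ECR_PUSH_TOKEN이라는 가상의 변수 이름을 가정한 스케치다.

```yaml
# 자격증명을 파이프라인 파일에 직접 쓰지 않고
# 프로젝트/그룹 변수(Masked)로만 참조하는 예시
push-image:
  stage: build
  script:
    - docker login -u AWS -p "$ECR_PUSH_TOKEN" "$ECR_REGISTRY_URL"
    - docker push "$ECR_REGISTRY_URL:$CI_COMMIT_SHORT_SHA"
```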

 

 

자, 파이프라인을 활용하여 빌드를 진행해보자.

include:
  - project: 'devops/gitlab-ci-template'
    ref: main
    file:
      - 'ecr-build.yml'
      - 'manifest-update.yml'
      - 'slack-notification.yml'

stages:
  - build
  - update-manifest

variables:
  # ECR 레지스트리 URL
  DEV_ECR_REGISTRY_URL: "178522123123.dkr.ecr.ap-northeast-2.amazonaws.com/test-sabo"
  PROD_ECR_REGISTRY_URL: "178522123123.dkr.ecr.ap-northeast-2.amazonaws.com/live-test-sabo"
  
  # 매니페스트 리포지토리 & 파일 경로
  MANIFEST_REPO_URL: "https://gitlab.santander.co.kr/devops/k8s-manifest.git"
  DEV_MANIFEST_FILE_PATH: "dev/test-sabo/2-ro-test-sabo.yaml"
  PROD_MANIFEST_FILE_PATH: "live/test-sabo/2-ro-test-sabo.yaml"
  
  # Slack Webhook URL
  SLACK_WEBHOOK_URL: "https://hooks.slack.com/services/T0C0TQGTE/123DBQND2EM/123h84123jyW123123AJXN0Me"

# ----------------------------------------------------------
# Build: dev → DEV_ECR_REGISTRY_URL / main → PROD_ECR_REGISTRY_URL
# + Build 실패 시 Slack 알림
# ----------------------------------------------------------
build-image:
  extends: .ecr_build_template
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev"'
      variables:
        ECR_REGISTRY_URL: $DEV_ECR_REGISTRY_URL
        ENVIRONMENT: "dev"
    - if: '$CI_COMMIT_BRANCH == "main"'
      variables:
        ECR_REGISTRY_URL: $PROD_ECR_REGISTRY_URL
        ENVIRONMENT: "prod"
  after_script:
    - !reference [.slack_notification_build_failed, after_script]

# ----------------------------------------------------------
# Manifest: dev → DEV_MANIFEST_FILE_PATH / main → PROD_MANIFEST_FILE_PATH
# + 성공/실패 Slack 알림
# ----------------------------------------------------------
update-manifest-image:
  extends: .manifest_update_template
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev"'
      variables:
        ENV_NAME: "dev"
        TARGET_MANIFEST_FILE: "$DEV_MANIFEST_FILE_PATH"
    - if: '$CI_COMMIT_BRANCH == "main"'
      variables:
        ENV_NAME: "prod"
        TARGET_MANIFEST_FILE: "$PROD_MANIFEST_FILE_PATH"
  after_script:
    - !reference [.slack_notification_deploy_success, after_script]
    - !reference [.slack_notification_manifest_failed, after_script]

 

빌드 구간 스크립트는 위와 같다. 근데 실제 빌드 명령어는 어디에도 없다.

이는 include로 다른 파이프라인 템플릿을 참조시키고, 이를 extends와 !reference 기능으로 연결해줬기 때문이다. 중복적으로 사용되는 스크립트를 각각의 레포지토리마다 적어줄 필요 없이 이와 같이 사용하면 좋다.

extends: 해당 템플릿 Job을 통으로 상속해 사용할 때

!reference: 해당 템플릿 내 일부 섹션(예: script, after_script)만 가져와 사용할 때
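두 기능의 차이를 가상의 템플릿 이름(.deploy_template, .notify)으로 요약하면 다음과 같다.

```yaml
# 설명용으로 가정한 템플릿 예시
.deploy_template:
  stage: deploy
  script:
    - ./deploy.sh

.notify:
  after_script:
    - curl -s -X POST "$SLACK_WEBHOOK_URL" -d '{"text":"done"}'

deploy-dev:
  extends: .deploy_template              # 템플릿 Job 전체를 상속
  after_script:
    - !reference [.notify, after_script] # 특정 섹션만 가져와 재사용
```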

 

자 그럼 실제 파이프라인의 빌드(include중 ecr-build) 구간을 살펴보자.

# ECR Docker 빌드 템플릿
# 사용법: .gitlab-ci.yml에서 include하여 사용

.ecr_build_template:
  stage: build
  image: moby/buildkit:master
  tags: [devops]
  variables:
    DOCKER_HOST: ""
    AWS_DEFAULT_REGION: "ap-northeast-2"
  before_script:
    - apk add --no-cache aws-cli docker-cli tzdata curl jq git
    - |
      set -e
      : "${ECR_REGISTRY_URL:?ECR_REGISTRY_URL not set}"
      : "${ENVIRONMENT:?ENVIRONMENT not set}"
      ECR_REGISTRY_HOST="$(echo "$ECR_REGISTRY_URL" | cut -d/ -f1)"
      aws ecr get-login-password --region "$AWS_DEFAULT_REGION" \
        | docker login --username AWS --password-stdin "$ECR_REGISTRY_HOST"
  script:
    - |
      set -e
      IMAGE_TAG="${ENVIRONMENT}-$(TZ="Asia/Seoul" date +'%Y%m%d-%H%M%S')"
      FULL_IMAGE_NAME="${ECR_REGISTRY_URL}:${IMAGE_TAG}"

      buildctl-daemonless.sh build \
        --frontend dockerfile.v0 \
        --local context=. \
        --local dockerfile=. \
        --opt build-arg:SYSTEM=aws \
        --opt build-arg:ENVIRONMENT=${ENVIRONMENT} \
        --output type=image,name=${FULL_IMAGE_NAME},push=true

      echo "FULL_IMAGE_NAME=${FULL_IMAGE_NAME}" > build.env
      echo "ENVIRONMENT=${ENVIRONMENT}" >> build.env
  artifacts:
    reports:
      dotenv: build.env

# dev 브랜치용 빌드
.ecr_build_dev:
  extends: .ecr_build_template
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev"'
      variables:
        ECR_REGISTRY_URL: $DEV_ECR_REGISTRY_URL
        ENVIRONMENT: "dev"

# main/prod 브랜치용 빌드
.ecr_build_prod:
  extends: .ecr_build_template
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      variables:
        ECR_REGISTRY_URL: $PROD_ECR_REGISTRY_URL
        ENVIRONMENT: "prod"

 

주요 동작:

  1. 베이스 템플릿 (.ecr_build_template): BuildKit을 사용해 Docker 이미지를 빌드하고 ECR에 푸시
  2. 환경 설정: AWS CLI로 ECR 로그인 후, 환경별(dev/prod) 이미지 태그 자동 생성 (예: dev-20241031-143022)
  3. 브랜치별 분기: dev 브랜치는 개발 ECR로, main 브랜치는 프로덕션 ECR로 자동 배포
  4. 아티팩트 전달: 빌드된 이미지 정보를 build.env 파일로 저장해 다음 Job에서 사용 가능

여기서 이미지 태그는 빌드 시각으로만 지정되어 있다. 한편 GitLab 파이프라인에서 기본 제공되는 커밋 SHA 변수는 도커 이미지 태그로 가장 널리 사용되는 값으로, 대표적인 변수는 다음과 같다.

- CI_COMMIT_SHA : 현재 파이프라인을 실행한 전체 커밋 SHA(길이 40)
- CI_COMMIT_SHORT_SHA : 전체 SHA의 앞 8자리 등, 짧은 형태

이 변수들은 도커 이미지의 태그로 바로 사용할 수 있어, 소스와 이미지의 트레이싱 및 일관성 관리에 적합하다.

 

예시

variables:
  IMAGE_TAG: $CI_COMMIT_SHORT_SHA

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t my-app:${IMAGE_TAG} .
    - docker push my-app:${IMAGE_TAG}

 

여튼, 이런 방식으로 이미지를 빌드하고, 컨테이너 레지스트리로 이미지를 푸시한다. 

위 플로우에서 2, 3번 단계가 진행됐다고 보면 된다.

자, 이제 빌드가 끝났으니 다음으로 쿠버네티스에 적용될 manifest 업데이트 구간(플로우의 4번)을 살펴보자

 

테스트 레포지토리의 내용은 앞서 build 구간처럼 include, extends를 사용하여 별다른 내용이 없다.

실제 manifest-update 스크립트의 내용은 다음과 같다.

 

# Manifest 업데이트 템플릿
# 사용법: .gitlab-ci.yml에서 include하여 사용

.manifest_update_template:
  stage: update-manifest
  image: alpine:3.18
  tags: [devops]
  before_script:
    - apk add --no-cache git curl jq
    - git config --global user.email "gitlab-ci@test-sabo.co.kr"
    - git config --global user.name "GitLab CI/CD"
  script:
    - |
      set -e
      : "${FULL_IMAGE_NAME:?FULL_IMAGE_NAME not provided from build step}"
      : "${TARGET_MANIFEST_FILE:?TARGET_MANIFEST_FILE not set}"
      : "${MANIFEST_REPO_URL:?MANIFEST_REPO_URL not set}"

      git clone "https://oauth2:${GIT_ACCESS_TOKEN}@${MANIFEST_REPO_URL#https://}"
      MANIFEST_DIR="$(basename "${MANIFEST_REPO_URL}" .git)"
      cd "${MANIFEST_DIR}"

      # Busybox sed 호환: 첫 번째 image: 라인만 교체(들여쓰기 보존)
      sed -i -e "1,/^[[:space:]]*image:[[:space:]]*/s#^\([[:space:]]*image:[[:space:]]*\).*#\1${FULL_IMAGE_NAME}#" "$TARGET_MANIFEST_FILE"

      git add "$TARGET_MANIFEST_FILE"
      git commit -m "Deploy: Update image to ${FULL_IMAGE_NAME} for ${CI_PROJECT_PATH}" || echo "No changes to commit, skipping push."
      git push origin main

      echo "Manifest update successful. ArgoCD will now sync the changes..."

# dev 브랜치용 Manifest 업데이트
.manifest_update_dev:
  extends: .manifest_update_template
  rules:
    - if: '$CI_COMMIT_BRANCH == "dev"'
      variables:
        ENV_NAME: "dev"
        TARGET_MANIFEST_FILE: "$DEV_MANIFEST_FILE_PATH"

# main/prod 브랜치용 Manifest 업데이트
.manifest_update_prod:
  extends: .manifest_update_template
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      variables:
        ENV_NAME: "prod"
        TARGET_MANIFEST_FILE: "$PROD_MANIFEST_FILE_PATH"

 

주요 동작:
1. 이전 Job 연계: 빌드 단계에서 생성된 FULL_IMAGE_NAME (이미지 정보)을 받아서 사용
2. Manifest 저장소 클론: 별도의 Git 저장소에서 Kubernetes 배포 매니페스트를 가져옴
3. 이미지 태그 교체: sed 명령어로 YAML 파일 내 image: 필드를 새 이미지로 자동 변경
4. 변경사항 커밋 & 푸시: 업데이트된 매니페스트를 Git 저장소에 자동 커밋
5. ArgoCD 연동: Manifest가 업데이트되면 ArgoCD가 자동으로 감지해 Kubernetes 클러스터에 배포
ps. 브랜치별 동작: dev 브랜치는 개발 매니페스트를, main 브랜치는 프로덕션 매니페스트를 업데이트
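위 3번 단계의 sed 치환이 어떻게 동작하는지, 가상의 매니페스트 파일로 확인해 볼 수 있는 간단한 스케치다. 파일 경로와 이미지 이름은 예시 값이다.

```shell
# 가상의 매니페스트 생성 (이미지 이름은 예시 값)
cat > /tmp/rollout-sample.yaml <<'EOF'
spec:
  template:
    spec:
      containers:
      - name: app
        image: registry.example.com/app:dev-old
        imagePullPolicy: Always
EOF

FULL_IMAGE_NAME="registry.example.com/app:dev-20251101-120000"

# 템플릿과 동일한 패턴: 첫 번째 image: 라인만 들여쓰기를 보존한 채 교체
sed -i -e "1,/^[[:space:]]*image:[[:space:]]*/s#^\([[:space:]]*image:[[:space:]]*\).*#\1${FULL_IMAGE_NAME}#" /tmp/rollout-sample.yaml

# image: 라인이 새 이미지로 교체된 것을 확인
grep 'image:' /tmp/rollout-sample.yaml
```

range 주소(`1,/패턴/`) 덕분에 파일 내 첫 번째 image: 라인에서만 치환이 일어나고, 캡처 그룹 `\1`이 들여쓰기와 `image: ` 접두사를 보존한다.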

 

보는 것처럼 Manifest 모음 레포지토리가 있고, 파이프라인 실행을 위해 생성된 파드에서 이를 git clone 한 뒤 이미지 태그 부분만 sed로 교체하여 git push 하는 방식이다.

 

실제 수정되는 rollout manifest는 다음과 같다.

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: test-sabo
  namespace: test-sabo
  labels:
    app: test-sabo
    version: v1
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: test-sabo
  template:
    metadata:
      labels:
        app: test-sabo
        version: v1
    spec:
      containers:
      - name: test-sabo
        image: 178512310749.dkr.ecr.ap-northeast-2.amazonaws.com/dev-test-sabo:dev-20251028-180217
        imagePullPolicy: Always
        ports:
        - containerPort: 9080

        env:
        - name: TZ
          value: Asia/Seoul
        - name: DB_TYPE
          value: "mysql"
        - name: MYSQL_DB_HOST
          valueFrom:
            secretKeyRef:
              name: mysql-auth
              key: MYSQL_DB_HOST
        - name: MYSQL_DB_PORT
          valueFrom:
            secretKeyRef:
              name: mysql-auth
              key: MYSQL_DB_PORT
        - name: MYSQL_DB_USER
          valueFrom:
            secretKeyRef:
              name: mysql-auth
              key: MYSQL_DB_USER
        - name: MYSQL_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-auth
              key: MYSQL_DB_PASSWORD
        - name: MYSQL_DB_NAME
          valueFrom:
            secretKeyRef:
              name: mysql-auth
              key: MYSQL_DB_NAME

        resources:
          limits:
            memory: 2048Mi
            #cpu: 500m
          requests:
            cpu: 300m
            memory: 1024Mi
      restartPolicy: Always
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: role
                    operator: In
                    values:
                      - service
      tolerations:
        - key: "role"
          operator: "Equal"
          value: "service"
          effect: "NoSchedule"
  strategy:
    blueGreen:
      activeService: test-sabo-svc
      previewService: test-sabo-preview-svc
      autoPromotionEnabled: true

 

manifest 업데이트 스테이지가 실행되면(아래 플로우의 4번)

manifest가 업데이트 되고, 설치돼있던 argocd에서 manifest git 저장소가 변경될 때 sync되어 신규 서비스가 배포되는 방식(현재 구성에서는 블루/그린 배포)이다.

 

자 그럼 이제 argocd가 manifest를 sync하여 실제 쿠버네티스에 배포 될 수 있도록 아르고시디에서 프로젝트, 애플리케이션을 등록해보자.

 

레포지토리 등록

 

 

argocd를 통해 배포된 리소스들을 확인할 수 있다.

쿠버네티스에서 확인된 실제 배포 상태.

 

 

자, 이제 그러면 처음부터 끝까지, 즉 코드 푸시부터 배포까지 진행해보자. 이번에는 앞서 구축한 파이프라인 외에 이미지 태그를 DB로 관리하는 것까지 추가로 설명한다.

 

1. 코드 푸시

 

 

2. 파이프라인 실행

 

2-1) 파이프라인 실행 상세 내용

a) 보는것처럼 .pre 스테이지에서 사전 검증을 진행하고 build / deploy를 진행한다. build/deploy 방식은 앞서 설명한것과 동일하다.

b)아르고시디의 실제 배포된 화면이다.

배포전 이미지 태그이다.

 

 

파이프라인이 실행되면서 신규 파드가 배포되는 상태이다.

 

파드가 정상적으로 종료되고 신규 파드만 남은 상태이다.

변경된 이미지 태그

 

자 그럼 이미지 태그 관리는 어떻게 할까 ?

현재 시스템에는 Go로 작성된 파이프라인 허브 서비스가 있고, 파이프라인 트리거는 이 허브 서비스를 통해 진행된다.

 

플로우

 

 

services/common/version_controller.go - 버전 컨트롤 로직

  • ControlUpdateVersion() 함수로 major.minor.feature 형식의 시맨틱 버저닝 관리
  • 업데이트 타입에 따라 버전 자동 증가

routes/interact.go - 버전 생성 및 적용

  • Build: ControlUpdateVersion()으로 버전 증가
  • Rebuild: 기존 버전에 -rebuild-YYMMDD-HHMM 형식 추가

services/query/write_build_info.go - 버전 정보 DB 저장

  • WriteBuildUpdateHistory(): 일반 빌드 버전 저장
  • WriteRebuildUpdateHistory(): 리빌드 버전 저장
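ControlUpdateVersion()의 동작을 가늠해 볼 수 있는 최소 스케치다. 실제 서비스 코드가 아니라, major.minor.feature 증가 규칙을 가정해 작성한 가상의 구현이다.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// controlUpdateVersion은 "major.minor.feature" 형식의 버전 문자열을
// 업데이트 타입에 따라 증가시킨다. (가정에 기반한 예시 구현)
func controlUpdateVersion(current, updateType string) (string, error) {
	parts := strings.Split(current, ".")
	if len(parts) != 3 {
		return "", fmt.Errorf("invalid version: %s", current)
	}
	nums := make([]int, 3)
	for i, p := range parts {
		n, err := strconv.Atoi(p)
		if err != nil {
			return "", err
		}
		nums[i] = n
	}
	switch updateType {
	case "major":
		nums[0]++
		nums[1], nums[2] = 0, 0 // 상위 자리 증가 시 하위 자리 초기화
	case "minor":
		nums[1]++
		nums[2] = 0
	case "feature":
		nums[2]++
	default:
		return "", fmt.Errorf("unknown update type: %s", updateType)
	}
	return fmt.Sprintf("%d.%d.%d", nums[0], nums[1], nums[2]), nil
}

func main() {
	v, _ := controlUpdateVersion("1.4.2", "feature")
	fmt.Println(v) // 출력: 1.4.3
}
```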

 

실제 저장되는 태그 디비는 다음과 같다.

 

 

 

 

5장 헬름

헬름이란 ?

  1. 커스터마이즈와 유사하지만 템플릿 기반 솔루션
  2. 커스터마이즈와 헬름의 차이점 중 하나는 chart 개념
  3. 패키지 관리자처럼 동작하여 버전관리, 공유, 배포 가능 Artifact를 생성
  4. chart는 공유 가능한 쿠버네티스 패키지
  5. 헬름은 애플리케이션의 ConfigMap이 변경되면 자동으로 롤링 업데이트가 시작되도록 하는 기능 몇가지를 제공
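5번에서 말한 ConfigMap 변경 시 롤링 업데이트는, 보통 디플로이먼트 템플릿의 파드 어노테이션에 ConfigMap 렌더링 결과의 체크섬을 넣는 방식으로 구현된다. 헬름 공식 차트 팁에 나오는 관용구이며, configmap.yaml 파일명은 예시다.

```yaml
# templates/deployment.yaml 중 일부 : ConfigMap 내용이 바뀌면
# 어노테이션 값(체크섬)이 바뀌어 파드가 롤링 업데이트된다
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```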

5.1 Creating a Helm Project

 

실습을 위한 kind ( k8s ) 배포

# kind create cluster --name myk8s --image kindest/node:v1.32.8 --config - <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
  - containerPort: 30001
    hostPort: 30001
EOF

Creating cluster "myk8s" ...
 ✓ Ensuring node image (kindest/node:v1.32.8) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-myk8s"
You can now use your cluster with:

kubectl cluster-info --context kind-myk8s

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂

 

 

헬름 차트 디렉터리 레이아웃 생성

# 헬름 차트 디렉터리 레이아웃 생성
mkdir pacman
mkdir pacman/templates
cd pacman

# 루트 디렉토리에 차트 정의 파일 작성 : 버전, 이름 등 정보
cat << EOF > Chart.yaml
apiVersion: v2
name: pacman
description: A Helm chart for Pacman
type: application
version: 0.1.0        # 차트 버전, 차트 정의가 바뀌면 업데이트한다
appVersion: "1.0.0"   # 애플리케이션 버전
EOF

# templates 디렉터리에 Go 템플릿 언어와 Sprig 라이브러리의 템플릿 함수를 사용해 정의한 배포 템플릿 파일 작성 : 애플리케이션 배포
## deployment.yaml 파일에서 템플릿화 : dp 이름, app 버전, replicas 수, 이미지/태그, 이미지 풀 정책, 보안 컨텍스트, 포트 
cat << EOF > templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name}}            # Chart.yaml 파일에 설정된 이름을 가져와 설정
  labels:
    app.kubernetes.io/name: {{ .Chart.Name}}
    {{- if .Chart.AppVersion }}     # Chart.yaml 파일에 appVersion 여부에 따라 버전을 설정
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}     # appVersion 값을 가져와 지정하고 따옴표 처리
    {{- end }}
spec:
  replicas: {{ .Values.replicaCount }}     # replicaCount 속성을 넣을 자리 placeholder
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Chart.Name}}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Chart.Name}}
    spec:
      containers:
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion}}"   # 이미지 지정 placeholder, 이미지 태그가 있으면 넣고, 없으면 Chart.yaml의 appVersion 값을 사용
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 14 }} # securityContext의 값을 YAML 객체로 지정하며 14칸 들여쓰기
          name: {{ .Chart.Name}}
          ports:
            - containerPort: {{ .Values.image.containerPort }}
              name: http
              protocol: TCP
EOF

# service.yaml 파일에서 템플릿화 : service 이름, 컨테이너 포트
minji  ~/Desktop/work/Gasida_series/practice/pacman  cat << EOF > templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: {{ .Chart.Name }}
  name: {{ .Chart.Name }}
spec:
  ports:
    - name: http
      port: {{ .Values.image.containerPort }}
      targetPort: {{ .Values.image.containerPort }}
  selector:
    app.kubernetes.io/name: {{ .Chart.Name }}
EOF

# 차트 기본값 default values가 담긴 파일 작성 : 애플리케이션 배포 시점에 다른 값으로 대체될 수 있는 기본 설정을 담아두는 곳
minji  ~/Desktop/work/Gasida_series/practice/pacman  cat << EOF > values.yaml
image:     # image 절 정의
  repository: quay.io/gitops-cookbook/pacman-kikd
  tag: "1.0.0"
  pullPolicy: Always
  containerPort: 8080

replicaCount: 1
securityContext: {}     # securityContext 속성의 값을 비운다
EOF


# 디렉터리 레이아웃 확인
minji  ~/Desktop/work/Gasida_series/practice/pacman  tree
.
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   └── service.yaml
└── values.yaml
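위 deployment.yaml의 `{{ .Values.image.tag | default .Chart.AppVersion }}`에서 default는 Helm이 내장한 Sprig 함수다. 파이프라인 마지막 인자로 넘어온 값이 비어 있으면 기본값을 돌려주는 동작을 Go text/template의 FuncMap으로 최소한만 재현해 본 스케치다(renderImage와 필드 이름은 예시용 가정이다).

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderImage는 {{ .Tag | default .AppVersion }} 파이프라인을 흉내낸다.
// 파이프(|)는 앞의 값을 함수의 마지막 인자로 넘기므로
// default는 default(def, val) 형태로 호출된다.
func renderImage(repo, tag, appVersion string) string {
	funcs := template.FuncMap{
		// val이 빈 문자열이면 def(기본값)를 반환
		"default": func(def, val string) string {
			if val == "" {
				return def
			}
			return val
		},
	}
	t := template.Must(template.New("img").Funcs(funcs).Parse(
		`image: "{{ .Repo }}:{{ .Tag | default .AppVersion }}"`))
	var buf bytes.Buffer
	if err := t.Execute(&buf, map[string]string{
		"Repo": repo, "Tag": tag, "AppVersion": appVersion,
	}); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	// values.yaml의 tag가 비어 있으면 Chart.yaml의 appVersion이 쓰인다
	fmt.Println(renderImage("quay.io/gitops-cookbook/pacman-kikd", "", "1.0.0"))
	// tag가 있으면 tag가 우선한다
	fmt.Println(renderImage("quay.io/gitops-cookbook/pacman-kikd", "1.1.0", "1.0.0"))
}
```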

 

 

헬름 차트를 로컬에서 YAML로 렌더링

minji  ~/Desktop/work/Gasida_series/practice/pacman  helm template .
---
# Source: pacman/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: pacman
  name: pacman
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app.kubernetes.io/name: pacman
---
# Source: pacman/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pacman            # Chart.yaml 파일에 설정된 이름을 가져와 설정
  labels:
    app.kubernetes.io/name: pacman     # Chart.yaml 파일에 appVersion 여부에 따라 버전을 설정
    app.kubernetes.io/version: "1.0.0"     # appVersion 값을 가져와 지정하고 따옴표 처리
spec:
  replicas: 1     # replicaCount 속성을 넣을 자리 placeholder
  selector:
    matchLabels:
      app.kubernetes.io/name: pacman
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pacman
    spec:
      containers:
        - image: "quay.io/gitops-cookbook/pacman-kikd:1.0.0"   # 이미지 지정 placeholder, 이미지 태그가 있으면 넣고, 없으면 Chart.yaml에 값을 설정
          imagePullPolicy: Always
          securityContext:
              {} # securityContext의 값을 YAML 객체로 지정하며 14칸 들여쓰기
          name: pacman
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
              
# --set 파라미터를 사용하여 기본값을 재정의
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm template --set replicaCount=3 .
---
# Source: pacman/templates/deployment.yaml
.
spec:
  replicas: 3     # replicaCount 속성을 넣을 자리 placeholder
.

 

 

해당 차트를 kind ( k8s ) 배포 및 helm 확인

# 해당 차트 배포
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm install pacman .
NAME: pacman
LAST DEPLOYED: Thu Oct 23 22:27:43 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

minji  ~/Desktop/work/Gasida_series/practice/pacman  helm list
NAME  	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART       	APP VERSION
pacman	default  	1       	2025-10-23 22:27:43.064742 +0900 KST	deployed	pacman-0.1.0	1.0.0

# 배포된 리소스 확인
minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get deploy,pod,svc,ep
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/pacman   1/1     1            1           108s

NAME                          READY   STATUS    RESTARTS   AGE
pod/pacman-576769bb86-pqkg7   1/1     Running   0          108s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    18m
service/pacman       ClusterIP   10.96.184.154   <none>        8080/TCP   108s

NAME                   ENDPOINTS           AGE
endpoints/kubernetes   192.168.97.2:6443   18m
endpoints/pacman       10.244.0.5:8080     108s

minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get pod -o yaml | kubectl neat | yq  # kubectl krew install neat
apiVersion: v1
items:
  - apiVersion: v1
    kind: Pod
    metadata:
      labels:
        app.kubernetes.io/name: pacman
        pod-template-hash: 576769bb86
      name: pacman-576769bb86-pqkg7
      namespace: default
    spec:
      containers:
        - image: quay.io/gitops-cookbook/pacman-kikd:1.0.0
          imagePullPolicy: Always
          name: pacman
          ports:
            - containerPort: 8080
              name: http
          volumeMounts:
            - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
              name: kube-api-access-dzgc2
              readOnly: true
      preemptionPolicy: PreemptLowerPriority
      priority: 0
      serviceAccountName: default
      tolerations:
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
      volumes:
        - name: kube-api-access-dzgc2
          projected:
            sources:
              - serviceAccountToken:
                  expirationSeconds: 3607
                  path: token
              - configMap:
                  items:
                    - key: ca.crt
                      path: ca.crt
                  name: kube-root-ca.crt
              - downwardAPI:
                  items:
                    - fieldRef:
                        fieldPath: metadata.namespace
                      path: namespace
kind: List
metadata: {}

minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get pod -o json | grep securityContext -A1
                        "securityContext": {},
                        "terminationMessagePath": "/dev/termination-log",
--
                "securityContext": {},
                "serviceAccount": "default",
                
##
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm history pacman
REVISION	UPDATED                 	STATUS  	CHART       	APP VERSION	DESCRIPTION
1       	Thu Oct 23 22:27:43 2025	deployed	pacman-0.1.0	1.0.0      	Install complete


# Helm 자체가 배포 릴리스 메타데이터를 저장하기 위해 자동으로 Secret 리소스 생성 : Helm이 차트의 상태를 복구하거나 rollback 할 때 이 데이터를 이용
minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get secret
NAME                           TYPE                 DATA   AGE
sh.helm.release.v1.pacman.v1   helm.sh/release.v1   1      11m

 

 

업그레이드, 메타데이터 확인

##
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm upgrade pacman --reuse-values --set relicaCount=2 .
Release "pacman" has been upgraded. Happy Helming!
NAME: pacman
LAST DEPLOYED: Thu Oct 23 22:39:50 2025
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None

##
minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
pacman-576769bb86-pqkg7   1/1     Running   0          12m

# 파드가 늘어나지 않았다. 위 upgrade의 --set 키가 replicaCount가 아니라 relicaCount로 오타가 났기 때문으로, 오타 키는 그대로 저장되고 실제 replicas는 기본값 1이 유지된다.

# helm 배포 정보 확인
#-1. 모든 정보
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm get all pacman
NAME: pacman
LAST DEPLOYED: Thu Oct 23 22:39:50 2025
NAMESPACE: default
STATUS: deployed
REVISION: 2
CHART: pacman
VERSION: 0.1.0
APP_VERSION: 1.0.0
TEST SUITE: None
USER-SUPPLIED VALUES:
relicaCount: 2

COMPUTED VALUES:
image:
  containerPort: 8080
  pullPolicy: Always
  repository: quay.io/gitops-cookbook/pacman-kikd
  tag: 1.0.0
relicaCount: 2
replicaCount: 1
securityContext: {}

HOOKS:
MANIFEST:
---
# Source: pacman/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: pacman
  name: pacman
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app.kubernetes.io/name: pacman
---
# Source: pacman/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pacman            # Chart.yaml 파일에 설정된 이름을 가져와 설정
  labels:
    app.kubernetes.io/name: pacman     # Chart.yaml 파일에 appVersion 여부에 따라 버전을 설정
    app.kubernetes.io/version: "1.0.0"     # appVersion 값을 가져와 지정하고 따옴표 처리
spec:
  replicas: 1     # replicaCount 속성을 넣을 자리 placeholder
  selector:
    matchLabels:
      app.kubernetes.io/name: pacman
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pacman
    spec:
      containers:
        - image: "quay.io/gitops-cookbook/pacman-kikd:1.0.0"   # 이미지 지정 placeholder, 이미지 태그가 있으면 넣고, 없으면 Chart.yaml에 값을 설정
          imagePullPolicy: Always
          securityContext:
              {} # securityContext의 값을 YAML 객체로 지정하며 14칸 들여쓰기
          name: pacman
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP

#-2. values 적용 정보          
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm get values pacman
USER-SUPPLIED VALUES:
relicaCount: 2

#-3. 실제 적용된 manifest
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm get manifest pacman
---
# Source: pacman/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: pacman
  name: pacman
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app.kubernetes.io/name: pacman
---
# Source: pacman/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pacman            # Chart.yaml 파일에 설정된 이름을 가져와 설정
  labels:
    app.kubernetes.io/name: pacman     # Chart.yaml 파일에 appVersion 여부에 따라 버전을 설정
    app.kubernetes.io/version: "1.0.0"     # appVersion 값을 가져와 지정하고 따옴표 처리
spec:
  replicas: 1     # replicaCount 속성을 넣을 자리 placeholder
  selector:
    matchLabels:
      app.kubernetes.io/name: pacman
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pacman
    spec:
      containers:
        - image: "quay.io/gitops-cookbook/pacman-kikd:1.0.0"   # 이미지 지정 placeholder, 이미지 태그가 있으면 넣고, 없으면 Chart.yaml에 값을 설정
          imagePullPolicy: Always
          securityContext:
              {} # securityContext의 값을 YAML 객체로 지정하며 14칸 들여쓰기
          name: pacman
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
              
              
# chart notes 확인
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm get notes pacman

# 삭제 후 secret 확인
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm uninstall pacman
release "pacman" uninstalled

minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get secret
No resources found in default namespace.

 

 

5.2 Reusing Statements Between Templates

같은 코드 확인 및 재사용 가능 코드 블록 정의

# deployment.yaml, service.yaml 에 selector 필드가 동일
## deployment.yaml
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Chart.Name}}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Chart.Name}}

## service.yaml
  selector:
    app.kubernetes.io/name: {{ .Chart.Name }}
    
## 이 필드를 업데이트하려면(selector 필드에 새 레이블 추가 등) 3곳을 똑같이 업데이트해야 함
# 템플릿 디렉터리에 _helpers.tpl 파일을 만들고 그 안에 재사용 가능한 템플릿 코드를 두어 재사용할 수 있게 기존 코드를 리팩터링하자
## _helpers.tpl 파일 작성 : define 으로 statement 이름을 정의하고, 본문이 해당 statement 가 하는 일을 정의
## (define 줄 뒤에 주석을 붙이면 템플릿 본문에 그대로 포함되므로 파일 안에는 주석을 넣지 않는다)
minji  ~/Desktop/work/Gasida_series/practice/pacman  cat << EOF > templates/_helpers.tpl
{{- define "pacman.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name}}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
{{- end }}
EOF

## deployment.yaml 수정
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "pacman.selectorLabels" . | nindent 6 }}   # pacman.selectorLabels를 호출한 결과를 6만큼 들여쓰기하여 주입
  template:
    metadata:
      labels:
        {{- include "pacman.selectorLabels" . | nindent 8 }} # pacman.selectorLabels를 호출한 결과를 8만큼 들여쓰기하여 주입
        
## service.yaml 수정
  selector:
    {{- include "pacman.selectorLabels" . | nindent 6 }}


minji  ~/Desktop/work/Gasida_series/practice/pacman  helm template .
---
# Source: pacman/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: pacman
  name: pacman
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
      app.kubernetes.io/name: pacman
      app.kubernetes.io/version: 1.0.0
---
# Source: pacman/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pacman            # Chart.yaml 파일에 설정된 이름을 가져와 설정
  labels:
    app.kubernetes.io/name: pacman     # Chart.yaml 파일에 appVersion 여부에 따라 버전을 설정
    app.kubernetes.io/version: "1.0.0"     # appVersion 값을 가져와 지정하고 따옴표 처리
spec:
  replicas: 1     # replicaCount 속성을 넣을 자리 placeholder
  selector:
    matchLabels:
      app.kubernetes.io/name: pacman
      app.kubernetes.io/version: 1.0.0   # pacman.selectorLabels를 호출한 결과를 6만큼 들여쓰기하여 주입
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pacman
        app.kubernetes.io/version: 1.0.0 # pacman.selectorLabels를 호출한 결과를 8만큼 들여쓰기하여 주입
    spec:
      containers:
        - image: "quay.io/gitops-cookbook/pacman-kikd:1.0.0"   # 이미지 지정 placeholder, 이미지 태그가 있으면 넣고, 없으면 Chart.yaml에 값을 설정
          imagePullPolicy: Always
          securityContext:
              {} # securityContext의 값을 YAML 객체로 지정하며 14칸 들여쓰기
          name: pacman
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP

 

 

5.3 Updating a Container Image in Helm

 

차트 배포

# _helpers.tpl 파일을 초기 설정(name 레이블만 포함)으로 되돌림
minji  ~/Desktop/work/Gasida_series/practice/pacman  cat << EOF > templates/_helpers.tpl
{{- define "pacman.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name}}
{{- end }}
EOF

# helm 배포
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm install pacman .
NAME: pacman
LAST DEPLOYED: Thu Oct 23 23:32:37 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# 확인 : 리비전 번호, 이미지 정보 확인
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm history pacman
REVISION	UPDATED                 	STATUS  	CHART       	APP VERSION	DESCRIPTION
1       	Thu Oct 23 23:32:37 2025	deployed	pacman-0.1.0	1.0.0      	Install complete

minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get deploy -owide
NAME     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                      SELECTOR
pacman   1/1     1            1           84s   pacman       quay.io/gitops-cookbook/pacman-kikd:1.0.0   app.kubernetes.io/name=pacman

 

 

1.1.0으로 이미지 갱신

# values.yaml에 이미지 태그 업데이트
minji  ~/Desktop/work/Gasida_series/practice/pacman  cat << EOF > values.yaml
image:
  repository: quay.io/gitops-cookbook/pacman-kikd
  tag: "1.1.0"
  pullPolicy: Always
  containerPort: 8080

replicaCount: 1
securityContext: {}
EOF

# Chart.yaml 파일에 appVersion 필드 갱신
minji  ~/Desktop/work/Gasida_series/practice/pacman  cat << EOF > Chart.yaml
apiVersion: v2
name: pacman
description: A Helm chart for Pacman
type: application
version: 0.1.0
appVersion: "1.1.0"
EOF

# 배포 업그레이드
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm upgrade pacman .
Release "pacman" has been upgraded. Happy Helming!
NAME: pacman
LAST DEPLOYED: Thu Oct 23 23:36:24 2025
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None

# 확인
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm history pacman
REVISION	UPDATED                 	STATUS    	CHART       	APP VERSION	DESCRIPTION
1       	Thu Oct 23 23:32:37 2025	superseded	pacman-0.1.0	1.0.0      	Install complete
2       	Thu Oct 23 23:36:24 2025	deployed  	pacman-0.1.0	1.1.0      	Upgrade complete

minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get secret
NAME                           TYPE                 DATA   AGE
sh.helm.release.v1.pacman.v1   helm.sh/release.v1   1      4m16s
sh.helm.release.v1.pacman.v2   helm.sh/release.v1   1      29s

minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get deploy,replicaset -owide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                      SELECTOR
deployment.apps/pacman   1/1     1            1           4m30s   pacman       quay.io/gitops-cookbook/pacman-kikd:1.1.0   app.kubernetes.io/name=pacman

NAME                                DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                      SELECTOR
replicaset.apps/pacman-576769bb86   0         0         0       4m30s   pacman       quay.io/gitops-cookbook/pacman-kikd:1.0.0   app.kubernetes.io/name=pacman,pod-template-hash=576769bb86
replicaset.apps/pacman-64c54b85f9   1         1         1       43s     pacman       quay.io/gitops-cookbook/pacman-kikd:1.1.0   app.kubernetes.io/name=pacman,pod-template-hash=64c54b85f9

 

 

여기서 잠깐

두 개의 개별 필드(tag, appVersion) 대신 appVersion을 tag로도 쓰는 방법을 생각해 볼 수 있다.

버전 필드의 쓰임새, 버전 관리 전략, 소프트웨어 수명 주기에 따라 어느 쪽으로 할지 정할 필요가 있다.

appVersion은 애플리케이션의 버전이므로 애플리케이션을 변경할 때마다 업데이트가 필요하다.

한편 version은 chart 버전이기 때문에 chart 정의(템플릿 등)가 변경되면 갱신한다.

따라서 두 필드는 서로 관계가 없다.

 

이전 버전으로 롤백

# 이전 버전으로 롤백
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm rollback pacman 1 && kubectl get pod -w
Rollback was a success! Happy Helming!
NAME                      READY   STATUS              RESTARTS   AGE
pacman-576769bb86-t5scc   0/1     ContainerCreating   0          0s
pacman-64c54b85f9-xqx2n   1/1     Running             0          2m44s
pacman-576769bb86-t5scc   1/1     Running             0          3s
pacman-64c54b85f9-xqx2n   1/1     Terminating         0          2m47s
pacman-64c54b85f9-xqx2n   0/1     Error               0          2m47s
pacman-64c54b85f9-xqx2n   0/1     Error               0          2m48s
pacman-64c54b85f9-xqx2n   0/1     Error               0          2m48s

# 확인
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm history pacman
REVISION	UPDATED                 	STATUS    	CHART       	APP VERSION	DESCRIPTION
1       	Thu Oct 23 23:32:37 2025	superseded	pacman-0.1.0	1.0.0      	Install complete
2       	Thu Oct 23 23:36:24 2025	superseded	pacman-0.1.0	1.1.0      	Upgrade complete
3       	Thu Oct 23 23:39:08 2025	superseded	pacman-0.1.0	1.0.0      	Rollback to 1
4       	Thu Oct 23 23:40:12 2025	deployed  	pacman-0.1.0	1.0.0      	Rollback to 1

minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get secret
NAME                           TYPE                 DATA   AGE
sh.helm.release.v1.pacman.v1   helm.sh/release.v1   1      10m
sh.helm.release.v1.pacman.v2   helm.sh/release.v1   1      7m3s
sh.helm.release.v1.pacman.v3   helm.sh/release.v1   1      4m19s
sh.helm.release.v1.pacman.v4   helm.sh/release.v1   1      3m15s

minji  ~/Desktop/work/Gasida_series/practice/pacman  kubectl get deploy,replicaset -owide
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES                                      SELECTOR
deployment.apps/pacman   1/1     1            1           11m   pacman       quay.io/gitops-cookbook/pacman-kikd:1.0.0   app.kubernetes.io/name=pacman

NAME                                DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                                      SELECTOR
replicaset.apps/pacman-576769bb86   1         1         1       11m     pacman       quay.io/gitops-cookbook/pacman-kikd:1.0.0   app.kubernetes.io/name=pacman,pod-template-hash=576769bb86
replicaset.apps/pacman-64c54b85f9   0         0         0       7m37s   pacman       quay.io/gitops-cookbook/pacman-kikd:1.1.0   app.kubernetes.io/name=pacman,pod-template-hash=64c54b85f9

 

 

values 파일 override

# value 새 파일 작성
minji  ~/Desktop/work/Gasida_series/practice/pacman  cat << EOF > newvalues.yaml
image:
  tag: "1.2.0"
EOF

# template 명령 실행 시 새 values 파일 함께 전달 : 결과적으로 values.yaml 기본값을 사용하지만, image.tag 값은 override 함
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm template pacman -f newvalues.yaml .
---
# Source: pacman/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: pacman
  name: pacman
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
      app.kubernetes.io/name: pacman
---
# Source: pacman/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pacman            # Chart.yaml 파일에 설정된 이름을 가져와 설정
  labels:
    app.kubernetes.io/name: pacman     # Chart.yaml 파일에 appVersion 여부에 따라 버전을 설정
    app.kubernetes.io/version: "1.1.0"     # appVersion 값을 가져와 지정하고 따옴표 처리
spec:
  replicas: 1     # replicaCount 속성을 넣을 자리 placeholder
  selector:
    matchLabels:
      app.kubernetes.io/name: pacman   # pacman.selectorLabels를 호출한 결과를 6만큼 들여쓰기하여 주입
  template:
    metadata:
      labels:
        app.kubernetes.io/name: pacman # pacman.selectorLabels를 호출한 결과를 8만큼 들여쓰기하여 주입
    spec:
      containers:
        - image: "quay.io/gitops-cookbook/pacman-kikd:1.2.0"   # 이미지 지정 placeholder, 이미지 태그가 있으면 넣고, 없으면 Chart.yaml에 값을 설정
          imagePullPolicy: Always
          securityContext:
              {} # securityContext의 값을 YAML 객체로 지정하며 14칸 들여쓰기
          name: pacman
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
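위 결과처럼 image.tag만 override되고 repository 등 나머지는 values.yaml 기본값이 유지된다. 이렇게 나중 소스가 이기되 map은 재귀적으로 병합되는 규칙을 Go로 최소한만 재현해 본 스케치다(실제 병합은 Helm의 chartutil 패키지가 담당하며, mergeValues는 예시용 이름이다).

```go
package main

import "fmt"

// mergeValues는 base(values.yaml) 위에 override(-f 파일, --set)를
// 겹쳐 쓴다. 둘 다 map이면 재귀 병합하고, 그 외에는 override가 이긴다.
func mergeValues(base, override map[string]any) map[string]any {
	out := map[string]any{}
	for k, v := range base {
		out[k] = v
	}
	for k, v := range override {
		if bm, ok1 := out[k].(map[string]any); ok1 {
			if om, ok2 := v.(map[string]any); ok2 {
				out[k] = mergeValues(bm, om) // 둘 다 map이면 재귀 병합
				continue
			}
		}
		out[k] = v // 그 외에는 override 값이 이긴다
	}
	return out
}

func main() {
	base := map[string]any{
		"image": map[string]any{
			"repository": "quay.io/gitops-cookbook/pacman-kikd",
			"tag":        "1.0.0",
		},
		"replicaCount": 1,
	}
	override := map[string]any{
		"image": map[string]any{"tag": "1.2.0"}, // newvalues.yaml에 해당
	}
	merged := mergeValues(base, override)
	fmt.Println(merged["image"].(map[string]any)["tag"])        // 1.2.0
	fmt.Println(merged["image"].(map[string]any)["repository"]) // 기본값 유지
	fmt.Println(merged["replicaCount"])                         // 1
}
```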

 

 

5.4 Packaging and Distributing a Helm Chart

helm chart를 패키징하고 공개하여 다른 차트의 의존성으로 이용되거나 다른 사용자가 시스템에 배포할 수 있도록 하는 방법을 알아보자.

# pacman 차트를 .tgz 파일로 패키징
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm package .
Successfully packaged chart and saved it to: /Users/howoo/Desktop/work/Gasida_series/practice/pacman/pacman-0.1.0.tgz

minji  ~/Desktop/work/Gasida_series/practice/pacman  gzcat pacman-0.1.0.tgz
pacman/Chart.yaml0000644000000000000000000000016415076440060012433 0ustar0000000000000000apiVersion: v2
appVersion: 1.1.0
description: A Helm chart for Pacman
name: pacman
type: application
version: 0.1.0
pacman/values.yaml0000644000000000000000000000023015076440060012663 0ustar0000000000000000image:
  repository: quay.io/gitops-cookbook/pacman-kikd
  tag: "1.1.0"
  pullPolicy: Always
  containerPort: 8080

replicaCount: 1
securityContext: {}
pacman/templates/_helpers.tpl0000644000000000000000000000013315076440060015022 0ustar0000000000000000{{- define "pacman.selectorLabels" -}}
app.kubernetes.io/name: {{ .Chart.Name}}
{{- end }}
pacman/templates/deployment.yaml0000644000000000000000000000276515076440060015561 0ustar0000000000000000apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name}}            # Chart.yaml 파일에 설정된 이름을 가져와 설정
  labels:
    app.kubernetes.io/name: {{ .Chart.Name}}
    {{- if .Chart.AppVersion }}     # Chart.yaml 파일에 appVersion 여부에 따라 버전을 설정
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}     # appVersion 값을 가져와 지정하고 따옴표 처리
    {{- end }}
spec:
  replicas: {{ .Values.replicaCount }}     # replicaCount 속성을 넣을 자리 placeholder
  selector:
    matchLabels:
      {{- include "pacman.selectorLabels" . | nindent 6 }}   # pacman.selectorLabels를 호출한 결과를 6만큼 들여쓰기하여 주입
  template:
    metadata:
      labels:
        {{- include "pacman.selectorLabels" . | nindent 8 }} # pacman.selectorLabels를 호출한 결과를 8만큼 들여쓰기하여 주입
    spec:
      containers:
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion}}"   # 이미지 지정 placeholder, 이미지 태그가 있으면 넣고, 없으면 Chart.yaml에 값을 설정
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 14 }} # securityContext의 값을 YAML 객체로 지정하며 14칸 들여쓰기
          name: {{ .Chart.Name}}
          ports:
            - containerPort: {{ .Values.image.containerPort }}
              name: http
              protocol: TCP
pacman/templates/service.yaml0000644000000000000000000000050015076440060015022 0ustar0000000000000000apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: {{ .Chart.Name }}
  name: {{ .Chart.Name }}
spec:
  ports:
    - name: http
      port: {{ .Values.image.containerPort }}
      targetPort: {{ .Values.image.containerPort }}
  selector:
    {{- include "pacman.selectorLabels" . | nindent 6 }}
pacman/newvalues.yaml0000644000000000000000000000002615076440060013400 0ustar0000000000000000image:
  tag: "1.2.0"
  
# 해당 차트를 차트 저장소(repository)에 게시
# 차트 저장소는 차트(.tgz)와 그에 대한 메타데이터 정보를 담은 index.yaml 파일이 있는 HTTP 서버
# 차트를 저장소에 게시하려면 index.yaml 파일을 새 메타데이터 정보로 업데이트하고 아티팩트를 업로드해야 한다.

# index.yaml 파일 생성
minji  ~/Desktop/work/Gasida_series/practice/pacman  helm repo index .
minji  ~/Desktop/work/Gasida_series/practice/pacman 
 
minji  ~/Desktop/work/Gasida_series/practice/pacman  cat index.yaml
apiVersion: v1
entries:
  pacman:
  - apiVersion: v2
    appVersion: 1.1.0
    created: "2025-10-23T23:50:45.092915+09:00"
    description: A Helm chart for Pacman
    digest: ecf7e2eb903a569ede9d11c989f76895d72850c653b630aa213835e5efe9a2b8
    name: pacman
    type: application
    urls:
    - pacman-0.1.0.tgz
    version: 0.1.0
generated: "2025-10-23T23:50:45.092641+09:00"

 

 

※ Bitnami 공개 카탈로그 삭제

Bitnami Helm Charts : 컨테이너 이미지 Tag ( latest )

# Bitnami nginx 의 OCI 주소
oci://registry-1.docker.io/bitnamicharts/nginx

# 기존 방식 helm repo 확인
# helm 저장소를 추가하지 않았으므로 Error
minji  ~  helm repo list
Error: no repositories to show

# helm chart 가져오기
minji  ~  helm pull oci://registry-1.docker.io/bitnamicharts/nginx --version 22.0.11
Pulled: registry-1.docker.io/bitnamicharts/nginx:22.0.11
Digest: sha256:22c9a95eced446e53f75fa41764059812049cfcbabe273942ea46b69183b496d

# 파일 목록 확인
minji  ~  tar -tf nginx-22.0.11.tgz
nginx/
nginx/charts/
nginx/charts/common/
nginx/charts/common/templates/
nginx/charts/common/templates/validations/
nginx/templates/
                 .
                 .
                 .

# helm show 명령
## helm show readme oci://registry-1.docker.io/bitnamicharts/nginx
                 .
                 .
                 .                
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

<http://www.apache.org/licenses/LICENSE-2.0>

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

## helm show values oci://registry-1.docker.io/bitnamicharts/nginx
  						    .
							    .
							    .
  ## @param metrics.startupProbe.successThreshold Success threshold for startupProbe

  ##
  startupProbe:
    enabled: false
    initialDelaySeconds: 5
    timeoutSeconds: 3
    periodSeconds: 5
    failureThreshold: 10
    successThreshold: 1


## helm show chart oci://registry-1.docker.io/bitnamicharts/nginx
								.
								.
								.
icon: https://dyltqmyl993wv.cloudfront.net/assets/stacks/nginx/img/nginx-stack-220x234.png
keywords:
- nginx
- http
- web
- www
- reverse proxy
maintainers:
- name: Broadcom, Inc. All Rights Reserved.
  url: https://github.com/bitnami/charts
name: nginx
sources:
- https://github.com/bitnami/charts/tree/main/bitnami/nginx
version: 22.1.1

# helm chart 바로 설치
minji  ~  helm install my-nginx oci://registry-1.docker.io/bitnamicharts/nginx --version 22.0.11
Pulled: registry-1.docker.io/bitnamicharts/nginx:22.0.11
Digest: sha256:22c9a95eced446e53f75fa41764059812049cfcbabe273942ea46b69183b496d
NAME: my-nginx
LAST DEPLOYED: Fri Oct 24 20:30:07 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 22.0.11
APP VERSION: 1.29.2
									.
									.
									.

minji  ~  helm repo list
Error: no repositories to show

# helm 확인
minji  ~  helm repo list
Error: no repositories to show

minji  ~  helm list
NAME    	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART        	APP VERSION
my-nginx	default  	1       	2025-10-24 20:30:07.731724 +0900 KST	deployed	nginx-22.0.11	1.29.2
pacman  	default  	4       	2025-10-23 23:40:12.59885 +0900 KST 	deployed	pacman-0.1.0 	1.0.0
 minji  ~  helm get metadata my-nginx
NAME: my-nginx
CHART: nginx
VERSION: 22.0.11
APP_VERSION: 1.29.2
ANNOTATIONS: fips=true,images=- name: git
  version: 2.51.0
  image: registry-1.docker.io/bitnami/git:latest
- name: nginx
  version: 1.29.2
  image: registry-1.docker.io/bitnami/nginx:latest
- name: nginx-exporter
  version: 1.5.0
  image: registry-1.docker.io/bitnami/nginx-exporter:latest
,licenses=Apache-2.0,tanzuCategory=clusterUtility
DEPENDENCIES: common
NAMESPACE: default
REVISION: 1
STATUS: deployed
DEPLOYED_AT: 2025-10-24T20:30:07+09:00

# deployment 확인 : IMAGES tags 확인
minji  ~  kubectl get deploy -owide
NAME       READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS   IMAGES                                      SELECTOR
my-nginx   1/1     1            1           2m30s   nginx        registry-1.docker.io/bitnami/nginx:latest   app.kubernetes.io/instance=my-nginx,app.kubernetes.io/name=nginx
pacman     1/1     1            1           21h     pacman       quay.io/gitops-cookbook/pacman-kikd:1.0.0   app.kubernetes.io/name=pacman

minji  ~  helm get manifest my-nginx | grep 'image:'
          image: registry-1.docker.io/bitnami/nginx:latest
          image: registry-1.docker.io/bitnami/nginx:latest
          
# 삭제
minji  ~  helm uninstall my-nginx
release "my-nginx" uninstalled

 

 

5.5 Deploying a Chart from a Repository

Bitnami / postgresql 배포 실습

# repo
minji  ~  helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

minji  ~  helm repo list
NAME   	URL
bitnami	https://charts.bitnami.com/bitnami

minji  ~  helm search repo postgresql
NAME                  	CHART VERSION	APP VERSION	DESCRIPTION
bitnami/postgresql    	18.1.1       	18.0.0     	PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql-ha 	16.3.2       	17.6.0     	This PostgreSQL cluster solution includes the P...
bitnami/cloudnative-pg	1.0.11       	1.26.1     	CloudNativePG is an open-source tool for managi...
bitnami/supabase      	5.3.6        	1.24.7     	DEPRECATED Supabase is an open source Firebase ...
bitnami/minio-operator	0.2.9        	7.1.1      	MinIO(R) Operator is a Kubernetes-native tool f...

minji  ~  helm search repo postgresql -o json | jq
[
  {
    "name": "bitnami/postgresql",
    "version": "18.1.1",
    "app_version": "18.0.0",
    "description": "PostgreSQL (Postgres) is an open source object-relational database known for reliability and data integrity. ACID-compliant, it supports foreign keys, joins, views, triggers and stored procedures."
  },
  {
    "name": "bitnami/postgresql-ha",
    "version": "16.3.2",
    "app_version": "17.6.0",
    "description": "This PostgreSQL cluster solution includes the PostgreSQL replication manager, an open-source tool for managing replication and failover on PostgreSQL clusters."
  },
  {
    "name": "bitnami/cloudnative-pg",
    "version": "1.0.11",
    "app_version": "1.26.1",
    "description": "CloudNativePG is an open-source tool for managing PostgreSQL databases on Kubernetes, from setup to ongoing upkeep."
  },
  {
    "name": "bitnami/supabase",
    "version": "5.3.6",
    "app_version": "1.24.7",
    "description": "DEPRECATED Supabase is an open source Firebase alternative. Provides all the necessary backend features to build your application in a scalable way. Uses PostgreSQL as datastore."
  },
  {
    "name": "bitnami/minio-operator",
    "version": "0.2.9",
    "app_version": "7.1.1",
    "description": "MinIO(R) Operator is a Kubernetes-native tool for deploying and managing high-performance, S3-compatible MinIO(R) object storage across hybrid cloud infrastructures."
  }
]


# Install
# (Note: the postgresql.* value paths below follow the book's older 10.x chart;
#  recent bitnami/postgresql charts expect auth.username / auth.password /
#  auth.database instead, so these --set keys are most likely ignored by 18.1.1.)
minji  ~  helm install my-db \
--set postgresql.postgresqlUsername=my-default,postgresql.postgresqlPassword=postgres,postgresql.postgresqlDatabase=mydb,postgresql.persistence.enabled=false \
bitnami/postgresql
NAME: my-db
LAST DEPLOYED: Fri Oct 24 20:36:45 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: postgresql
CHART VERSION: 18.1.1
APP VERSION: 18.0.0
														.
														.
														.
WARNING: There are "resources" sections in the chart not set. Using "resourcesPreset" is not recommended for production. For production installations, please set the following values according to your workload needs:
  - primary.resources
  - readReplicas.resources
+info https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
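
The comma-separated `--set` string above is equivalent to this values file, which could be passed with `-f values.yaml` instead (often easier to read and to keep in Git):

```yaml
# equivalent of the --set flags used in the helm install above
postgresql:
  postgresqlUsername: my-default
  postgresqlPassword: postgres
  postgresqlDatabase: mydb
  persistence:
    enabled: false
```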

# Verify
minji  ~  helm list
NAME  	NAMESPACE	REVISION	UPDATED                             	STATUS  	CHART            	APP VERSION
my-db 	default  	1       	2025-10-24 20:36:45.218051 +0900 KST	deployed	postgresql-18.1.1	18.0.0
pacman	default  	4       	2025-10-23 23:40:12.59885 +0900 KST 	deployed	pacman-0.1.0     	1.0.0

minji  ~  kubectl get sts,pod,svc,ep,secret
NAME                                READY   AGE
statefulset.apps/my-db-postgresql   1/1     77s

NAME                          READY   STATUS    RESTARTS      AGE
pod/my-db-postgresql-0        1/1     Running   0             77s
pod/pacman-576769bb86-t5scc   1/1     Running   2 (18m ago)   20h

NAME                          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/kubernetes            ClusterIP   10.96.0.1      <none>        443/TCP    22h
service/my-db-postgresql      ClusterIP   10.96.23.246   <none>        5432/TCP   77s
service/my-db-postgresql-hl   ClusterIP   None           <none>        5432/TCP   77s
service/pacman                ClusterIP   10.96.105.28   <none>        8080/TCP   21h

NAME                            ENDPOINTS           AGE
endpoints/kubernetes            192.168.97.2:6443   22h
endpoints/my-db-postgresql      10.244.0.8:5432     77s
endpoints/my-db-postgresql-hl   10.244.0.8:5432     77s
endpoints/pacman                10.244.0.4:8080     21h

NAME                                  TYPE                 DATA   AGE
secret/my-db-postgresql               Opaque               1      77s
secret/sh.helm.release.v1.my-db.v1    helm.sh/release.v1   1      77s
secret/sh.helm.release.v1.pacman.v1   helm.sh/release.v1   1      21h
secret/sh.helm.release.v1.pacman.v2   helm.sh/release.v1   1      21h
secret/sh.helm.release.v1.pacman.v3   helm.sh/release.v1   1      20h
secret/sh.helm.release.v1.pacman.v4   helm.sh/release.v1   1      20h

# For third-party charts, the default values and override parameters aren't visible directly; inspect them with helm show values
helm show values bitnami/postgresql
																.
																.
																.
## rules:
##   - alert: HugeReplicationLag
##     expr: pg_replication_lag{service="{{ printf "%s-metrics" (include "postgresql.v1.chart.fullname" .) }}"} / 3600 > 1
##     for: 1m
##     labels:
##       severity: critical
##     annotations:
##       description: replication for {{ include "postgresql.v1.chart.fullname" . }} PostgreSQL is lagging by {{ "{{ $value }}" }} hour(s).
##       summary: PostgreSQL replication is lagging by {{ "{{ $value }}" }} hour(s).
##
rules: []

# Clean up after the exercise
minji  ~  helm uninstall my-db
release "my-db" uninstalled

 

 

5.6 Deploying a Chart with a dependency

Deploy a chart that uses another chart as a dependency.

In this exercise we deploy PostgreSQL plus a Java service that returns the list of songs stored in the database.

## 
mkdir music
mkdir music/templates
cd music

##
minji  ~/Desktop/work/Gasida_series/practice/music  cat << EOF > templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name}}
  labels:
    app.kubernetes.io/name: {{ .Chart.Name}}
    {{- if .Chart.AppVersion }}
    app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
    {{- end }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ .Chart.Name}}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ .Chart.Name}}
    spec:
      containers:
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion}}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          name: {{ .Chart.Name}}
          ports:
            - containerPort: {{ .Values.image.containerPort }}
              name: http
              protocol: TCP
          env:
            - name: QUARKUS_DATASOURCE_JDBC_URL
              value: {{ .Values.postgresql.server | default (printf "%s-postgresql" ( .Release.Name )) | quote }}
            - name: QUARKUS_DATASOURCE_USERNAME
              value: {{ .Values.postgresql.postgresqlUsername | default (printf "postgres" ) | quote }}
            - name: QUARKUS_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: {{ .Values.postgresql.secretName | default (printf "%s-postgresql" ( .Release.Name )) | quote }}
                  key: {{ .Values.postgresql.secretKey }}
EOF
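
A side note on the heredocs used here: the delimiter is unquoted (`<< EOF`), so the shell expands `$variables` and `$(commands)` inside the text. The templates above contain no `$`, so it is safe, but a template that does (e.g. a Prometheus rule with `{{ $value }}`) would be silently mangled; quoting the delimiter passes the text through literally:

```shell
# Unquoted vs quoted heredoc delimiters -- a quick demonstration
value='EXPANDED'
cat << EOF     # unquoted delimiter: the shell substitutes $value
unquoted: $value
EOF
cat << 'EOF'   # quoted delimiter: the text is passed through literally
quoted: $value
EOF
```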

##
minji  ~/Desktop/work/Gasida_series/practice/music  cat << EOF > templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: {{ .Chart.Name }}
  name: {{ .Chart.Name }}
spec:
  ports:
    - name: http
      port: {{ .Values.image.containerPort }}
      targetPort: {{ .Values.image.containerPort }}
  selector:
    app.kubernetes.io/name: {{ .Chart.Name }}
EOF

## Chart.yaml when using the book's chart version (postgresql 10.16.2)
minji  ~/Desktop/work/Gasida_series/practice/music  cat << EOF > Chart.yaml
apiVersion: v2
name: music
description: A Helm chart for Music service
type: application
version: 0.1.0
appVersion: "1.0.0"
dependencies:
  - name: postgresql
    version: 10.16.2
    repository: "https://charts.bitnami.com/bitnami"
EOF

##
minji  ~/Desktop/work/Gasida_series/practice/music  helm search repo postgresql
NAME                  	CHART VERSION	APP VERSION	DESCRIPTION
bitnami/postgresql    	18.1.1       	18.0.0     	PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql-ha 	16.3.2       	17.6.0     	This PostgreSQL cluster solution includes the P...
bitnami/cloudnative-pg	1.0.11       	1.26.1     	CloudNativePG is an open-source tool for managi...
bitnami/supabase      	5.3.6        	1.24.7     	DEPRECATED Supabase is an open source Firebase ...
bitnami/minio-operator	0.2.9        	7.1.1      	MinIO(R) Operator is a Kubernetes-native tool f...

##
minji  ~/Desktop/work/Gasida_series/practice/music  helm search repo bitnami/postgresql --versions
NAME                 	CHART VERSION	APP VERSION	DESCRIPTION
bitnami/postgresql   	18.1.1       	18.0.0     	PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql   	18.0.17      	18.0.0     	PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql   	18.0.16      	18.0.0     	PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql   	18.0.15      	18.0.0     	PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql   	18.0.14      	18.0.0     	PostgreSQL (Postgres) is an open source object-...
bitnami/postgresql   	18.0.12      	18.0.0     	PostgreSQL (Postgres) is an open source object-...
																			.
																			.
																			.

## Use a recent chart version instead
minji  ~/Desktop/work/Gasida_series/practice/music  cat << EOF > Chart.yaml
apiVersion: v2
name: music
description: A Helm chart for Music service
type: application
version: 0.1.0
appVersion: "1.0.0"
dependencies:
  - name: postgresql
    version: 18.0.17 # book 10.16.2
    repository: "https://charts.bitnami.com/bitnami"
EOF

##
minji  ~/Desktop/work/Gasida_series/practice/music  cat << EOF > values.yaml
image:
  repository: quay.io/gitops-cookbook/music
  tag: "1.0.0"
  pullPolicy: Always
  containerPort: 8080

replicaCount: 1

postgresql:
  server: jdbc:postgresql://music-db-postgresql:5432/mydb
  postgresqlUsername: my-default
  postgresqlPassword: postgres
  postgresqlDatabase: mydb
  secretName: music-db-postgresql
  secretKey: postgresql-password
EOF
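
With these values, the env section of templates/deployment.yaml from earlier should render roughly as follows (every field is supplied by values.yaml, so none of the `default`/`printf` fallbacks fire):

```yaml
# sketch of the rendered env block for release "music-db"
env:
  - name: QUARKUS_DATASOURCE_JDBC_URL
    value: "jdbc:postgresql://music-db-postgresql:5432/mydb"
  - name: QUARKUS_DATASOURCE_USERNAME
    value: "my-default"
  - name: QUARKUS_DATASOURCE_PASSWORD
    valueFrom:
      secretKeyRef:
        name: "music-db-postgresql"
        key: postgresql-password
```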


# Download the charts declared as dependencies into the chart's charts/ directory
minji  ~/Desktop/work/Gasida_series/practice/music  helm dependency update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
Saving 1 charts
Downloading postgresql from repo https://charts.bitnami.com/bitnami
Pulled: registry-1.docker.io/bitnamicharts/postgresql:18.0.17
Digest: sha256:84b63af46f41ac35e3cbcf098e8cf124211c250807cfed43f7983c39c6e30b72
Deleting outdated charts

##
minji  ~/Desktop/work/Gasida_series/practice/music  tree
.
├── Chart.lock
├── Chart.yaml
├── charts
│   └── postgresql-18.0.17.tgz
├── templates
│   ├── deployment.yaml
│   └── service.yaml
└── values.yaml

3 directories, 6 files

# Install the chart
minji  ~/Desktop/work/Gasida_series/practice/music  helm install music-db .
NAME: music-db
LAST DEPLOYED: Fri Oct 24 20:51:28 2025
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

# Verify
minji  ~/Desktop/work/Gasida_series/practice/music  kubectl get sts,pod,svc,ep,secret,pv,pvc
NAME                                   READY   AGE
statefulset.apps/music-db-postgresql   1/1     93s

NAME                          READY   STATUS                       RESTARTS      AGE
pod/music-6c45d566f4-v7btz    0/1     CreateContainerConfigError   0             93s
pod/music-db-postgresql-0     1/1     Running                      0             93s
pod/pacman-576769bb86-t5scc   1/1     Running                      2 (33m ago)   21h

NAME                             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/kubernetes               ClusterIP   10.96.0.1      <none>        443/TCP    22h
service/music                    ClusterIP   10.96.45.200   <none>        8080/TCP   93s
service/music-db-postgresql      ClusterIP   10.96.131.73   <none>        5432/TCP   93s
service/music-db-postgresql-hl   ClusterIP   None           <none>        5432/TCP   93s
service/pacman                   ClusterIP   10.96.105.28   <none>        8080/TCP   21h

NAME                               ENDPOINTS           AGE
endpoints/kubernetes               192.168.97.2:6443   22h
endpoints/music                                        93s
endpoints/music-db-postgresql      10.244.0.11:5432    93s
endpoints/music-db-postgresql-hl   10.244.0.11:5432    93s
endpoints/pacman                   10.244.0.4:8080     21h

NAME                                    TYPE                 DATA   AGE
secret/music-db-postgresql              Opaque               1      93s
secret/sh.helm.release.v1.music-db.v1   helm.sh/release.v1   1      93s
secret/sh.helm.release.v1.pacman.v1     helm.sh/release.v1   1      21h
secret/sh.helm.release.v1.pacman.v2     helm.sh/release.v1   1      21h
secret/sh.helm.release.v1.pacman.v3     helm.sh/release.v1   1      21h
secret/sh.helm.release.v1.pacman.v4     helm.sh/release.v1   1      21h

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-5c9948cf-c5fe-41b3-ba23-33453f80eaf2   8Gi        RWO            Delete           Bound    default/data-my-db-postgresql-0      standard       <unset>                          16m
persistentvolume/pvc-947648b2-8fb2-43c3-816a-11078b1594f3   8Gi        RWO            Delete           Bound    default/data-music-db-postgresql-0   standard       <unset>                          91s

NAME                                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/data-music-db-postgresql-0   Bound    pvc-947648b2-8fb2-43c3-816a-11078b1594f3   8Gi        RWO            standard       <unset>                 93s
persistentvolumeclaim/data-my-db-postgresql-0      Bound    pvc-5c9948cf-c5fe-41b3-ba23-33453f80eaf2   8Gi        RWO            standard       <unset>                 16m

# Troubleshooting 1: the music pod is in CreateContainerConfigError because the
# secret created by the newer chart does not contain the "postgresql-password"
# key that the template references. Add the missing key/value to the secret:
kubectl edit secret music-db-postgresql
postgresql-password: cG9zdGdyZXMK
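
A note on that value: `cG9zdGdyZXMK` is the base64 of "postgres" plus a trailing newline, a classic `echo` pitfall worth knowing when editing secrets by hand (`echo -n` avoids the stray newline, which may otherwise end up in the password the container sees):

```shell
# Plain echo appends a newline, which gets encoded into the secret value
echo postgres | base64       # -> cG9zdGdyZXMK  ("postgres\n")
echo -n postgres | base64    # -> cG9zdGdyZXM=  ("postgres")

# A non-interactive alternative to `kubectl edit` (hypothetical, assumes the
# same secret name and key as above):
# kubectl patch secret music-db-postgresql --type merge \
#   -p "{\"data\":{\"postgresql-password\":\"$(echo -n postgres | base64)\"}}"
```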

# Troubleshooting 2: try to fix this one yourself!
kubectl logs -l app.kubernetes.io/name=music -f
__  ____  __  _____   ___  __ ____  ______
 --/ __ \/ / / / _ | / _ \/ //_/ / / / __/
 -/ /_/ / /_/ / __ |/ , _/ ,< / /_/ /\ \
--\___\_\____/_/ |_/_/|_/_/|_|\____/___/
2025-10-24 11:58:11,847 WARN  [io.agr.pool] (agroal-11) Datasource '<default>': Something unusual has occurred to cause the driver to fail. Please report this exception.
2025-10-24 11:58:11,876 WARN  [org.hib.eng.jdb.env.int.JdbcEnvironmentInitiator] (JPA Startup Thread: <default>) HHH000342: Could not obtain connection to query metadata: org.postgresql.util.PSQLException: Something unusual has occurred to cause the driver to fail. Please report this exception.
	at org.postgresql.Driver.connect(Driver.java:286)
																.
																.
																.

# Port-forward the music service, then call it to check the song list
minji  ~/Desktop/work/Gasida_series/practice/music  kubectl port-forward service/music 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080

minji  ~/Desktop/work/Gasida_series/practice/music  helm uninstall music-db
release "music-db" uninstalled
 minji  ~/Desktop/work/Gasida_series/practice/music  kubectl delete pvc --all
persistentvolumeclaim "data-music-db-postgresql-0" deleted

 

 

Chapter 6. Cloud Native CI/CD

  • At its core, continuous integration (CI) is the process that automatically takes the new code a developer writes and builds, tests, and runs it.
  • What is Tekton?
    • A Kubernetes-based, open source, cloud native CI/CD system.
  • Core concepts
    • Task
      • A reusable, loosely coupled set of steps that performs a specific function (e.g. building a container image).
      • A Task runs as a Kubernetes pod, and each step of the Task maps to a container.
    • Pipeline
      • The list of Tasks needed to build and/or deploy an app.
    • TaskRun
      • The execution of a Task instance, and its result.
    • PipelineRun
      • The execution of a Pipeline instance, and its result. Contains multiple TaskRuns.
    • Trigger
      • Detects events and connects to other CRDs to specify what happens when such an event occurs.
  • Components
    • Tekton is modular; every component can be installed individually or all at once.
    • Tekton Pipelines
      • Provides Task and Pipeline
    • Tekton Triggers
      • Provides Trigger and EventListener
    • Tekton Dashboard
      • A dashboard for visualizing pipelines and their logs
    • Tekton CLI
      • A CLI for managing Tekton objects (start/stop pipelines and tasks, view logs)
  • Tekton flow

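The concepts above can be made concrete with a minimal sketch (hypothetical names): a Task whose single step runs as a container inside the Task's pod, and a TaskRun that executes it.

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: hello            # hypothetical
spec:
  steps:                 # each step becomes a container in the Task's pod
    - name: say-hello
      image: alpine
      script: |
        echo "Hello, Tekton!"
---
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: hello-run        # the execution (and result) of the Task instance
spec:
  taskRef:
    name: hello
```

A Pipeline would list Tasks like this one, and a PipelineRun would execute the Pipeline, producing one TaskRun per Task.
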
 

6.1 Installing Tekton

Pipeline

# Install Tekton Pipelines
minji  ~/Desktop/work/Gasida_series/practice/music  kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml
namespace/tekton-pipelines created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-controller-tenant-access created
clusterrole.rbac.authorization.k8s.io/tekton-pipelines-webhook-cluster-access created
clusterrole.rbac.authorization.k8s.io/tekton-events-controller-cluster-access created
role.rbac.authorization.k8s.io/tekton-pipelines-controller created
role.rbac.authorization.k8s.io/tekton-pipelines-webhook created
																.
																.
																.

# Verify the Tekton Pipelines installation
minji  ~/Desktop/work/Gasida_series/practice/music  kubectl get crd
NAME                                       CREATED AT
customruns.tekton.dev                      2025-10-24T13:14:17Z
pipelineruns.tekton.dev                    2025-10-24T13:14:17Z
pipelines.tekton.dev                       2025-10-24T13:14:17Z
resolutionrequests.resolution.tekton.dev   2025-10-24T13:14:17Z
stepactions.tekton.dev                     2025-10-24T13:14:17Z
taskruns.tekton.dev                        2025-10-24T13:14:17Z
tasks.tekton.dev                           2025-10-24T13:14:17Z
verificationpolicies.tekton.dev            2025-10-24T13:14:17Z

#
minji  ~/Desktop/work/Gasida_series/practice/music  kubectl get ns | grep tekton
tekton-pipelines             Active   81s
tekton-pipelines-resolvers   Active   81s

#
minji  ~/Desktop/work/Gasida_series/practice/music  kubectl krew install get-all
WARNING: To be able to run kubectl plugins, you need to add
the following to your ~/.zshrc:

    export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
														    .
														    .
														    .
														    
#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get-all -n tekton-pipelines
NAME                                                                                                                                     NAMESPACE         AGE
configmap/config-defaults                                                                                                                tekton-pipelines  7m
configmap/config-events                                                                                                                  tekton-pipelines  7m
configmap/config-leader-election-controller                                                                                              tekton-pipelines  7m
configmap/config-leader-election-events
														    .
														    .
														    .

# 
minji  ~/Desktop/work/Gasida_series/practice  kubectl get-all -n tekton-pipelines-resolvers
NAME                                                                                  NAMESPACE                   AGE
configmap/bundleresolver-config                                                       tekton-pipelines-resolvers  7m36s
configmap/cluster-resolver-config                                                     tekton-pipelines-resolvers  7m36s
configmap/config-leader-election-resolvers                                            tekton-pipelines-resolvers  7m36s
configmap/config-logging                                                              tekton-pipelines-resolvers  7m36s
configmap/config-observability                                                        tekton-pipelines-resolvers  7m36s
configmap/git-resolver-config                                                         tekton-pipelines-resolvers  7m36s
configmap/http-resolver-config                                                        tekton-pipelines-resolvers  7m36s
configmap/hubresolver-config                                                          tekton-pipelines-resolvers  7m36s
configmap/kube-root-ca.crt                                                            tekton-pipelines-resolvers  7m36s
configmap/resolvers-feature-flags                                                     tekton-pipelines-resolvers  7m36s
endpoints/tekton-pipelines-remote-resolvers                                           tekton-pipelines-resolvers  7m36s
pod/tekton-pipelines-remote-resolvers-86f56b6664-rjhxv                                tekton-pipelines-resolvers  7m36s
serviceaccount/default                                                                tekton-pipelines-resolvers  7m36s
serviceaccount/tekton-pipelines-resolvers                                             tekton-pipelines-resolvers  7m36s
service/tekton-pipelines-remote-resolvers                                             tekton-pipelines-resolvers  7m36s
deployment.apps/tekton-pipelines-remote-resolvers                                     tekton-pipelines-resolvers  7m36s
replicaset.apps/tekton-pipelines-remote-resolvers-86f56b6664                          tekton-pipelines-resolvers  7m36s
lease.coordination.k8s.io/controller.tektonresolverframework.bundleresolver.00-of-01  tekton-pipelines-resolvers  7m4s
lease.coordination.k8s.io/controller.tektonresolverframework.cluster.00-of-01         tekton-pipelines-resolvers  7m4s
lease.coordination.k8s.io/controller.tektonresolverframework.git.00-of-01             tekton-pipelines-resolvers  7m4s
lease.coordination.k8s.io/controller.tektonresolverframework.http.00-of-01            tekton-pipelines-resolvers  7m4s
lease.coordination.k8s.io/controller.tektonresolverframework.hub.00-of-01             tekton-pipelines-resolvers  7m4s
endpointslice.discovery.k8s.io/tekton-pipelines-remote-resolvers-gtrch                tekton-pipelines-resolvers  7m36s
rolebinding.rbac.authorization.k8s.io/tekton-pipelines-resolvers-namespace-rbac       tekton-pipelines-resolvers  7m36s
role.rbac.authorization.k8s.io/tekton-pipelines-resolvers-namespace-rbac              tekton-pipelines-resolvers  7m36s

#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get all -n tekton-pipelines-resolvers
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/tekton-pipelines-remote-resolvers-86f56b6664-rjhxv   1/1     Running   0          7m47s

NAME                                        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                      AGE
service/tekton-pipelines-remote-resolvers   ClusterIP   10.96.65.6   <none>        9090/TCP,8008/TCP,8080/TCP   7m47s

NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tekton-pipelines-remote-resolvers   1/1     1            1           7m47s

NAME                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/tekton-pipelines-remote-resolvers-86f56b6664   1         1         1       7m47s

# pod 확인
minji  ~/Desktop/work/Gasida_series/practice  kubectl get pod -n tekton-pipelines
NAME                                           READY   STATUS    RESTARTS   AGE
tekton-events-controller-99665746c-v4bw8       1/1     Running   0          8m49s
tekton-pipelines-controller-7595d6585d-vjhq8   1/1     Running   0          8m49s
tekton-pipelines-webhook-5967d74cc4-t5th4      1/1     Running   0          8m49s

 

 

Trigger

# Install Tekton Triggers
minji  ~/Desktop/work/Gasida_series/practice  kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml
clusterrole.rbac.authorization.k8s.io/tekton-triggers-admin created
clusterrole.rbac.authorization.k8s.io/tekton-triggers-core-interceptors created
clusterrole.rbac.authorization.k8s.io/tekton-triggers-core-interceptors-secrets created
clusterrole.rbac.authorization.k8s.io/tekton-triggers-eventlistener-roles created
																		.
																		.
																		.
# 
minji  ~/Desktop/work/Gasida_series/practice  kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/latest/interceptors.yaml
secret/tekton-triggers-core-interceptors-certs created
deployment.apps/tekton-triggers-core-interceptors created
service/tekton-triggers-core-interceptors created
clusterinterceptor.triggers.tekton.dev/cel created
clusterinterceptor.triggers.tekton.dev/bitbucket created
clusterinterceptor.triggers.tekton.dev/slack created
clusterinterceptor.triggers.tekton.dev/github created
clusterinterceptor.triggers.tekton.dev/gitlab created

# Verify the Tekton Triggers installation
minji  ~/Desktop/work/Gasida_series/practice  kubectl get crd | grep triggers
clusterinterceptors.triggers.tekton.dev      2025-10-24T13:26:39Z
clustertriggerbindings.triggers.tekton.dev   2025-10-24T13:26:39Z
eventlisteners.triggers.tekton.dev           2025-10-24T13:26:39Z
interceptors.triggers.tekton.dev             2025-10-24T13:26:39Z
triggerbindings.triggers.tekton.dev          2025-10-24T13:26:39Z
triggers.triggers.tekton.dev                 2025-10-24T13:26:39Z
triggertemplates.triggers.tekton.dev         2025-10-24T13:26:39Z
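
How these CRDs fit together, as a hypothetical sketch: an EventListener receives webhooks, an interceptor (one of the clusterinterceptors installed above) filters them, a TriggerBinding extracts fields from the payload, and a TriggerTemplate stamps out a PipelineRun. All referenced names below are assumptions for illustration:

```yaml
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: example-listener                   # hypothetical
spec:
  serviceAccountName: tekton-triggers-sa   # assumed SA bound to the eventlistener roles
  triggers:
    - name: on-github-push
      interceptors:
        - ref:
            name: github                   # filters/validates GitHub webhook events
      bindings:
        - ref: example-binding             # TriggerBinding: event payload -> params
      template:
        ref: example-template              # TriggerTemplate: instantiates a PipelineRun
```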

#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get all -n tekton-pipelines
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/tekton-events-controller-99665746c-v4bw8             1/1     Running   0          14m
pod/tekton-pipelines-controller-7595d6585d-vjhq8         1/1     Running   0          14m
pod/tekton-pipelines-webhook-5967d74cc4-t5th4            1/1     Running   0          14m
pod/tekton-triggers-controller-74fccfc888-kks9r          1/1     Running   0          118s
pod/tekton-triggers-core-interceptors-7b8dcb59fb-5h42s   1/1     Running   0          74s
pod/tekton-triggers-webhook-5465cc8d5b-9kq58             1/1     Running   0          118s

NAME                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                              AGE
service/tekton-events-controller            ClusterIP   10.96.87.139    <none>        9090/TCP,8008/TCP,8080/TCP           14m
service/tekton-pipelines-controller         ClusterIP   10.96.138.51    <none>        9090/TCP,8008/TCP,8080/TCP           14m
service/tekton-pipelines-webhook            ClusterIP   10.96.164.74    <none>        9090/TCP,8008/TCP,443/TCP,8080/TCP   14m
service/tekton-triggers-controller          ClusterIP   10.96.129.69    <none>        9000/TCP                             118s
service/tekton-triggers-core-interceptors   ClusterIP   10.96.150.198   <none>        8443/TCP                             74s
service/tekton-triggers-webhook             ClusterIP   10.96.22.249    <none>        443/TCP                              118s

NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tekton-events-controller            1/1     1            1           14m
deployment.apps/tekton-pipelines-controller         1/1     1            1           14m
deployment.apps/tekton-pipelines-webhook            1/1     1            1           14m
deployment.apps/tekton-triggers-controller          1/1     1            1           118s
deployment.apps/tekton-triggers-core-interceptors   1/1     1            1           74s
deployment.apps/tekton-triggers-webhook             1/1     1            1           118s

NAME                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/tekton-events-controller-99665746c             1         1         1       14m
replicaset.apps/tekton-pipelines-controller-7595d6585d         1         1         1       14m
replicaset.apps/tekton-pipelines-webhook-5967d74cc4            1         1         1       14m
replicaset.apps/tekton-triggers-controller-74fccfc888          1         1         1       118s
replicaset.apps/tekton-triggers-core-interceptors-7b8dcb59fb   1         1         1       74s
replicaset.apps/tekton-triggers-webhook-5465cc8d5b             1         1         1       118s

NAME                                                           REFERENCE                             TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook   Deployment/tekton-pipelines-webhook   cpu: <unknown>/100%   1         5         1          14m

# Verify the Triggers-related deployments
minji  ~/Desktop/work/Gasida_series/practice  kubectl get deploy -n tekton-pipelines | grep triggers
tekton-triggers-controller          1/1     1            1           2m41s
tekton-triggers-core-interceptors   1/1     1            1           117s
tekton-triggers-webhook             1/1     1            1           2m41s

 

 

Dashboard

# Install Tekton Dashboard
minji  ~/Desktop/work/Gasida_series/practice  kubectl apply -f https://storage.googleapis.com/tekton-releases/dashboard/latest/release.yaml
customresourcedefinition.apiextensions.k8s.io/extensions.dashboard.tekton.dev created
serviceaccount/tekton-dashboard created
role.rbac.authorization.k8s.io/tekton-dashboard-info created
clusterrole.rbac.authorization.k8s.io/tekton-dashboard-backend-edit created
clusterrole.rbac.authorization.k8s.io/tekton-dashboard-backend-view created
clusterrole.rbac.authorization.k8s.io/tekton-dashboard-tenant-view created
rolebinding.rbac.authorization.k8s.io/tekton-dashboard-info created
clusterrolebinding.rbac.authorization.k8s.io/tekton-dashboard-backend-view created
configmap/dashboard-info created
service/tekton-dashboard created
deployment.apps/tekton-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/tekton-dashboard-tenant-view created
clusterrolebinding.rbac.authorization.k8s.io/tekton-dashboard-pipelines-view created
clusterrolebinding.rbac.authorization.k8s.io/tekton-dashboard-triggers-view created

# Verify the Tekton Dashboard installation
minji  ~/Desktop/work/Gasida_series/practice  kubectl get crd | grep dashboard
extensions.dashboard.tekton.dev              2025-10-24T13:30:07Z

#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get all -n tekton-pipelines
NAME                                                     READY   STATUS    RESTARTS   AGE
pod/tekton-dashboard-7d4499b584-5fxb5                    1/1     Running   0          58s
pod/tekton-events-controller-99665746c-v4bw8             1/1     Running   0          16m
pod/tekton-pipelines-controller-7595d6585d-vjhq8         1/1     Running   0          16m
pod/tekton-pipelines-webhook-5967d74cc4-t5th4            1/1     Running   0          16m
pod/tekton-triggers-controller-74fccfc888-kks9r          1/1     Running   0          4m26s
pod/tekton-triggers-core-interceptors-7b8dcb59fb-5h42s   1/1     Running   0          3m42s
pod/tekton-triggers-webhook-5465cc8d5b-9kq58             1/1     Running   0          4m26s

NAME                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                              AGE
service/tekton-dashboard                    ClusterIP   10.96.30.117    <none>        9097/TCP                             58s
service/tekton-events-controller            ClusterIP   10.96.87.139    <none>        9090/TCP,8008/TCP,8080/TCP           16m
service/tekton-pipelines-controller         ClusterIP   10.96.138.51    <none>        9090/TCP,8008/TCP,8080/TCP           16m
service/tekton-pipelines-webhook            ClusterIP   10.96.164.74    <none>        9090/TCP,8008/TCP,443/TCP,8080/TCP   16m
service/tekton-triggers-controller          ClusterIP   10.96.129.69    <none>        9000/TCP                             4m26s
service/tekton-triggers-core-interceptors   ClusterIP   10.96.150.198   <none>        8443/TCP                             3m42s
service/tekton-triggers-webhook             ClusterIP   10.96.22.249    <none>        443/TCP                              4m26s

NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tekton-dashboard                    1/1     1            1           58s
deployment.apps/tekton-events-controller            1/1     1            1           16m
deployment.apps/tekton-pipelines-controller         1/1     1            1           16m
deployment.apps/tekton-pipelines-webhook            1/1     1            1           16m
deployment.apps/tekton-triggers-controller          1/1     1            1           4m26s
deployment.apps/tekton-triggers-core-interceptors   1/1     1            1           3m42s
deployment.apps/tekton-triggers-webhook             1/1     1            1           4m26s

NAME                                                           DESIRED   CURRENT   READY   AGE
replicaset.apps/tekton-dashboard-7d4499b584                    1         1         1       58s
replicaset.apps/tekton-events-controller-99665746c             1         1         1       16m
replicaset.apps/tekton-pipelines-controller-7595d6585d         1         1         1       16m
replicaset.apps/tekton-pipelines-webhook-5967d74cc4            1         1         1       16m
replicaset.apps/tekton-triggers-controller-74fccfc888          1         1         1       4m26s
replicaset.apps/tekton-triggers-core-interceptors-7b8dcb59fb   1         1         1       3m42s
replicaset.apps/tekton-triggers-webhook-5465cc8d5b             1         1         1       4m26s

NAME                                                           REFERENCE                             TARGETS               MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook   Deployment/tekton-pipelines-webhook   cpu: <unknown>/100%   1         5         1          16m

# Verify the Dashboard-related deployments are installed
minji  ~/Desktop/work/Gasida_series/practice  kubectl get deploy -n tekton-pipelines
NAME                                READY   UP-TO-DATE   AVAILABLE   AGE
tekton-dashboard                    1/1     1            1           2m34s
tekton-events-controller            1/1     1            1           18m
tekton-pipelines-controller         1/1     1            1           18m
tekton-pipelines-webhook            1/1     1            1           18m
tekton-triggers-controller          1/1     1            1           6m2s
tekton-triggers-core-interceptors   1/1     1            1           5m18s
tekton-triggers-webhook             1/1     1            1           6m2s

# Check the Service details
minji  ~/Desktop/work/Gasida_series/practice  kubectl get svc,ep -n tekton-pipelines tekton-dashboard
NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/tekton-dashboard   ClusterIP   10.96.30.117   <none>        9097/TCP   3m9s

NAME                         ENDPOINTS          AGE
endpoints/tekton-dashboard   10.244.0.25:9097   3m9s

#
minji  ~/Desktop/work/Gasida_series/practice  [200~kubectl get svc -n tekton-pipelines tekton-dashboard -o yaml~
 ✘ minji  ~/Desktop/work/Gasida_series/practice 
 ✘ minji  ~/Desktop/work/Gasida_series/practice  kubectl get svc -n tekton-pipelines tekton-dashboard -o yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"tekton-dashboard","app.kubernetes.io/component":"dashboard","app.kubernetes.io/instance":"default","app.kubernetes.io/name":"dashboard","app.kubernetes.io/part-of":"tekton-dashboard","app.kubernetes.io/version":"v0.62.0","dashboard.tekton.dev/release":"v0.62.0","version":"v0.62.0"},"name":"tekton-dashboard","namespace":"tekton-pipelines"},"spec":{"ports":[{"name":"http","port":9097,"protocol":"TCP","targetPort":9097}],"selector":{"app.kubernetes.io/component":"dashboard","app.kubernetes.io/instance":"default","app.kubernetes.io/name":"dashboard","app.kubernetes.io/part-of":"tekton-dashboard"}}}
  creationTimestamp: "2025-10-24T13:30:07Z"
...

# Change the Service to type NodePort (nodePort 30000)
minji  ~/Desktop/work/Gasida_series/practice  kubectl patch svc -n tekton-pipelines tekton-dashboard -p '{"spec":{"type":"NodePort","ports":[{"port":9097,"targetPort":9097,"nodePort":30000}]}}'
service/tekton-dashboard patched
 
#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get svc,ep -n tekton-pipelines tekton-dashboard
NAME                       TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/tekton-dashboard   NodePort   10.96.30.117   <none>        9097:30000/TCP   5m44s

NAME                         ENDPOINTS          AGE
endpoints/tekton-dashboard   10.244.0.25:9097   5m44s

# Access the Tekton Dashboard
minji  ~/Desktop/work/Gasida_series/practice  open http://localhost:30000

Install the Tekton CLI

minji  ~/Desktop/work/Gasida_series/practice  brew install tektoncd-cli
==> Auto-updating Homebrew...
Adjust how often this is run with `$HOMEBREW_AUTO_UPDATE_SECS` or disable with
`$HOMEBREW_NO_AUTO_UPDATE=1`. Hide these hints with `$HOMEBREW_NO_ENV_HINTS=1` (see `man brew`).
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core and homebrew/cask).
==> New Formulae
btllib: Bioinformatics Technology Lab common code library
ghidra: Multi-platform software reverse engineering framework
...
  
#
minji  ~/Desktop/work/Gasida_series/practice 
 ✘ minji  ~/Desktop/work/Gasida_series/practice 
 ✘ minji  ~/Desktop/work/Gasida_series/practice  tkn version
Client version: 0.42.0
Pipeline version: v1.5.0
Triggers version: v0.33.0
Dashboard version: v0.62.0

 

 

6.2 Create a hello world task

Create a Task

#
minji  ~/Desktop/work/Gasida_series/practice  kubectl explain tasks.tekton.dev
GROUP:      tekton.dev
KIND:       Task
VERSION:    v1

# Create the task
minji  ~/Desktop/work/Gasida_series/practice  cat << EOF | kubectl apply -f -
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: hello
spec:
  steps:
    - name: echo    # step name
      image: alpine # container image the step runs in
      script: |
        #!/bin/sh
        echo "Hello World"
EOF
task.tekton.dev/hello created

# Verify
minji  ~/Desktop/work/Gasida_series/practice  tkn task list
NAME    DESCRIPTION   AGE
hello                 9 seconds ago

#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get tasks
NAME    AGE
hello   29s

#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get tasks -o yaml
apiVersion: v1
items:
- apiVersion: tekton.dev/v1
  kind: Task
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"tekton.dev/v1","kind":"Task","metadata":{"annotations":{},"name":"hello","namespace":"default"},"spec":{"steps":[{"image":"alpine","name":"echo","script":"#!/bin/sh\necho \"Hello World\"\n"}]}}
    creationTimestamp: "2025-10-24T13:52:15Z"
    generation: 1
    name: hello
    namespace: default
    resourceVersion: "25364"
    uid: b29d6f56-b42c-4856-a412-186fc80cd25d
  spec:
    steps:
    - computeResources: {}
      image: alpine
      name: echo
      script: |
        #!/bin/sh
        echo "Hello World"
kind: List
metadata:
  resourceVersion: ""
  
#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get pod
NAME                      READY   STATUS    RESTARTS       AGE
pacman-576769bb86-t5scc   1/1     Running   2 (154m ago)   23h

Start the task with the tkn CLI

# New terminal: monitor pod status
minji  ~  kubectl get pod -w
NAME                      READY   STATUS    RESTARTS       AGE
pacman-576769bb86-t5scc   1/1     Running   2 (156m ago)   23h
hello-run-7rb4h-pod       0/1     Pending   0              0s
hello-run-7rb4h-pod       0/1     Pending   0              0s
hello-run-7rb4h-pod       0/1     Init:0/2   0              0s
hello-run-7rb4h-pod       0/1     Init:1/2   0              7s
hello-run-7rb4h-pod       0/1     PodInitializing   0              11s
hello-run-7rb4h-pod       1/1     Running           0              18s
hello-run-7rb4h-pod       1/1     Running           0              18s
hello-run-7rb4h-pod       0/1     Completed         0              19s
hello-run-7rb4h-pod       0/1     Completed         0              20s

# Start the task with the tkn CLI
minji  ~/Desktop/work/Gasida_series/practice  kubectl get pod
NAME                      READY   STATUS    RESTARTS       AGE
pacman-576769bb86-t5scc   1/1     Running   2 (154m ago)   23h
 minji  ~/Desktop/work/Gasida_series/practice  tk
n task start --showlog hello
TaskRun started: hello-run-7rb4h
Waiting for logs to be available...
[echo] Hello World
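Under the hood, `tkn task start` simply creates a TaskRun resource; the same run can be started without the CLI by submitting a manifest. A minimal sketch (use `kubectl create` rather than `apply`, since `generateName` produces a new name each time):

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  generateName: hello-run-  # the server appends a random suffix, like the tkn-created run
spec:
  taskRef:
    name: hello             # the Task created above
```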

# Verify the pod runs "2 init containers and 1 container"
minji  ~/Desktop/work/Gasida_series/practice  kubectl describe pod -l tekton.dev/task=hello

...

Init Containers:
  prepare:
    Container ID:  containerd://a43c17ce1d040c740775d3aa404a09ea90e048ba7a8a6e4c78008e4e6ec3331f
    Image:         ghcr.io/tektoncd/pipeline/entrypoint-bff0a22da108bc2f16c818c97641a296:v1.5.0@sha256:ff5ee925ff7b08853cc4caa93e5e3e0ee761a2db6ae0a1ae6a0f6f120f170b56
    Image ID:      ghcr.io/tektoncd/pipeline/entrypoint-bff0a22da108bc2f16c818c97641a296@sha256:ff5ee925ff7b08853cc4caa93e5e3e0ee761a2db6ae0a1ae6a0f6f120f170b56

...

# Check the logs
minji  ~/Desktop/work/Gasida_series/practice  kubectl logs -l tekton.dev/task=hello -c prepare
2025/10/24 13:55:43 Entrypoint initialization

minji  ~/Desktop/work/Gasida_series/practice  kubectl logs -l tekton.dev/task=hello -c place-scripts
2025/10/24 13:55:46 Decoded script /tekton/scripts/script-0-tccks

minji  ~/Desktop/work/Gasida_series/practice  kubectl logs -l tekton.dev/task=hello -c step-echo
Hello World

#
minji  ~/Desktop/work/Gasida_series/practice  tkn task logs hello
Hello World

 minji  ~/Desktop/work/Gasida_series/practice  tkn task describe hello
Name:        hello
Namespace:   default

🦶 Steps

 ∙ echo

🗂  Taskruns

NAME              STARTED         DURATION   STATUS
hello-run-7rb4h   5 minutes ago   19s        Succeeded

#
minji  ~/Desktop/work/Gasida_series/practice  tkn taskrun logs
[echo] Hello World

 minji  ~/Desktop/work/Gasida_series/practice  tkn taskrun list
NAME              STARTED         DURATION   STATUS
hello-run-7rb4h   5 minutes ago   19s        Succeeded

# Delete the taskrun before the next exercise
minji  ~/Desktop/work/Gasida_series/practice  kubectl delete taskruns --all
taskrun.tekton.dev "hello-run-7rb4h" deleted

6.3 Create a Task to compile and package an App from git

Let's use Tekton to automate compiling and packaging app code kept in a git repository.

The key idea here is to build tasks with well-defined inputs and outputs, so they can be reused later when composing pipelines.
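To make "well-defined inputs and outputs" concrete: in Tekton, `params` are a task's declared inputs and `results` are its outputs. A hypothetical task (not part of this exercise) showing both:

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: greet                # hypothetical example task
spec:
  params:                    # input: declared, typed, with a default
  - name: who
    type: string
    default: world
  results:                   # output: written to a well-known path
  - name: greeting
  steps:
  - name: say
    image: alpine
    script: |
      #!/bin/sh
      printf 'Hello %s' "$(params.who)" | tee "$(results.greeting.path)"
```

A later pipeline task could then consume the output as `$(tasks.greet.results.greeting)`.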

Clone source code from git using Tekton Pipelines

# Write the pipeline manifest
owoo@ttokkang-ui-MacBookAir  ~/Desktop/work/Gasida_series/practice  cat << EOF | kubectl apply -f -
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: clone-read
spec:
  description: |
    This pipeline clones a git repo, then echoes the README file to the stout.
  params:     # parameter: repo-url
  - name: repo-url
    type: string
    description: The git repo URL to clone from.
  workspaces: # add a workspace, a shared volume that holds the downloaded code
  - name: shared-data
    description: |
      This workspace contains the cloned repo files, so they can be read by the
      next task.
  tasks:      # task definitions
  - name: fetch-source
    taskRef:
      name: git-clone
    workspaces:
    - name: output
      workspace: shared-data
    params:
    - name: url
      value: \$(params.repo-url)
EOF
pipeline.tekton.dev/clone-read created

# Verify
minji  ~/Desktop/work/Gasida_series/practice  tkn pipeline list
NAME         AGE              LAST RUN   STARTED   DURATION   STATUS
clone-read   36 seconds ago   ---        ---       ---        ---

#
minji  ~/Desktop/work/Gasida_series/practice  tkn pipeline describe
Name:          clone-read
Namespace:     default
Description:   This pipeline clones a git repo, then echoes the README file to the stout.


⚓ Params

 NAME         TYPE     DESCRIPTION              DEFAULT VALUE
 ∙ repo-url   string   The git repo URL to...   ---

📂 Workspaces

 NAME            DESCRIPTION              OPTIONAL
 ∙ shared-data   This workspace cont...   false

🗒  Tasks

 NAME             TASKREF     RUNAFTER   TIMEOUT   PARAMS
 ∙ fetch-source   git-clone              ---       url: string
#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get pipeline
NAME         AGE
clone-read   57s

#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get pipeline -o yaml
apiVersion: v1
items:
- apiVersion: tekton.dev/v1
  kind: Pipeline
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"tekton.dev/v1","kind":"Pipeline","metadata":{"annotations":{},"name":"clone-read","namespace":"default"},"spec":{"description":"This pipeline clones a git repo, then echoes the README file to the stout.\n","params":[{"description":"The git repo URL to clone from.","name":"repo-url","type":"string"}],"tasks":[{"name":"fetch-source","params":[{"name":"url","value":"$(params.repo-url)"}],"taskRef":{"name":"git-clone"},"workspaces":[{"name":"output","workspace":"shared-data"}]}],"workspaces":[{"description":"This workspace contains the cloned repo files, so they can be read by the\nnext task.\n","name":"shared-data"}]}}
    creationTimestamp: "2025-10-24T14:11:08Z"
    generation: 1
    name: clone-read
    namespace: default
    resourceVersion: "29298"
    uid: 835cc5df-8ad7-4ae7-af32-073069f38770
  spec:
    description: |
      This pipeline clones a git repo, then echoes the README file to the stout.
    params:
    - description: The git repo URL to clone from.
      name: repo-url
      type: string
    tasks:
    - name: fetch-source
      params:
      - name: url
        value: $(params.repo-url)
      taskRef:
        kind: Task
        name: git-clone
      workspaces:
      - name: output
        workspace: shared-data
    workspaces:
    - description: |
        This workspace contains the cloned repo files, so they can be read by the
        next task.
      name: shared-data
kind: List
metadata:
  resourceVersion: ""
 minji  ~/Desktop/work/Gasida_series/practice  kubectl get pod
NAME                      READY   STATUS    RESTARTS       AGE
pacman-576769bb86-t5scc   1/1     Running   2 (173m ago)   23h

 

# Run the pipeline: instantiate it and set concrete values
cat << EOF | kubectl create -f -
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: clone-read-run-
spec:
  pipelineRef:
    name: clone-read
  taskRunTemplate:
    podTemplate:
      securityContext:
        fsGroup: 65532
  workspaces: # instantiate the workspace, creating a PVC
  - name: shared-data
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  params:    # set the repo URL parameter value
  - name: repo-url
    value: https://github.com/tektoncd/website
EOF

#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get pipelineruns -o yaml
apiVersion: v1
items:
- apiVersion: tekton.dev/v1
  kind: PipelineRun
  metadata:
...
        runningInEnvWithInjectedSidecars: true
        verificationNoMatchPolicy: ignore
    startTime: "2025-10-24T14:14:16Z"
kind: List
metadata:
  resourceVersion: ""
  
#
minji  ~/Desktop/work/Gasida_series/practice  kubectl get pipelineruns
NAME                   SUCCEEDED   REASON           STARTTIME   COMPLETIONTIME
clone-read-run-zjfkj   False       CouldntGetTask   45s         45s

#
minji  ~/Desktop/work/Gasida_series/practice  tkn pipelinerun list
NAME                   STARTED        DURATION   STATUS
clone-read-run-zjfkj   1 minute ago   0s         Failed(CouldntGetTask)

#
minji  ~/Desktop/work/Gasida_series/practice  tkn pipelinerun logs
Pipeline default/clone-read can't be Run; it contains Tasks that don't exist: Couldn't retrieve Task "git-clone": tasks.tekton.dev "git-clone" not found


 

# The git-clone task must be installed in the cluster before the pipeline can use it
minji  ~/Desktop/work/Gasida_series/practice  tkn hub install task git-clone
WARN: This version has been deprecated
Task git-clone(0.9) installed in default namespace

# Verify the added task
minji  ~/Desktop/work/Gasida_series/practice  kubectl get tasks
NAME        AGE
git-clone   43s
hello       27m
 minji  ~/Desktop/work/Gasida_series/practice  kubectl get tasks git-clone -o yaml | kubectl neat | yq
apiVersion: tekton.dev/v1
kind: Task
metadata:
  annotations:
    tekton.dev/categories: Git
...
# Re-run the pipeline
minji  ~/Desktop/work/Gasida_series/practice  cat << EOF | kubectl create -f -
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: clone-read-run-
spec:
  pipelineRef:
    name: clone-read
  taskRunTemplate:
    podTemplate:
      securityContext:
        fsGroup: 65532
  workspaces:
  - name: shared-data
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  params:
  - name: repo-url
    value: https://github.com/tektoncd/website
EOF
pipelinerun.tekton.dev/clone-read-run-fk5fb created

#
minji  ~/Desktop/work/Gasida_series/practice  tkn pipelinerun list
NAME                   STARTED          DURATION   STATUS
clone-read-run-fk5fb   24 seconds ago   19s        Succeeded
clone-read-run-zjfkj   6 minutes ago    0s         Failed(CouldntGetTask)

#
minji  ~/Desktop/work/Gasida_series/practice  tkn pipelinerun logs
? Select pipelinerun:  [Use arrows to move, type to filter]

# Check the pv and pvc
minji  ~/Desktop/work/Gasida_series/practice  kubectl get pod,pv,pvc
NAME                                        READY   STATUS      RESTARTS       AGE
pod/clone-read-run-fk5fb-fetch-source-pod   0/1     Completed   0              100s
pod/pacman-576769bb86-t5scc                 1/1     Running     2 (3h2m ago)   23h

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-b5399cd4-c602-4fe4-9424-ee3ea6e76cac   1Gi        RWO            Delete           Bound    default/pvc-fcf9b36883   standard       <unset>                          98s

NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/pvc-fcf9b36883   Bound    pvc-b5399cd4-c602-4fe4-9424-ee3ea6e76cac   1Gi        RWO            standard       <unset>

 

6.4 Create a task to compile and package an app from private git

Automate compiling and packaging an app from a private git repository with Tekton.

Tekton supports two authentication schemes for git: basic-auth and ssh.

Both options store the credentials in a k8s Secret and attach it to the ServiceAccount that runs the Tekton task or pipeline.
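For the basic-auth option, Tekton reads a `kubernetes.io/basic-auth` Secret whose annotation names the git host it applies to; the Secret is attached to the run's ServiceAccount just like the ssh secret used in this exercise. A sketch with placeholder credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: git-basic-auth                    # hypothetical name
  annotations:
    tekton.dev/git-0: https://github.com  # host this credential applies to
type: kubernetes.io/basic-auth
stringData:
  username: my-github-user                # placeholder
  password: ghp_xxxxxxxx                  # placeholder personal access token
```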

 

Hands-on: create a sample app and initialize git

# Initialize git
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app  git init
Initialized empty Git repository in /Users/howoo/Desktop/work/Gasida_series/practice/my-sample-app/.git/

# Configure the git user
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  git config --global user.name "*********"
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  git config --global user.email "*********"
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  git add .
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main ✚  git commit -m "Initial commit - sample app"
[main (root-commit) 88c932e] Initial commit - sample app
 1 file changed, 1 insertion(+)
 create mode 100644 app.js

 

Connect the GitHub remote and push

# Register the origin remote:
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  git remote add origin https://github.com/ktokang/my-sample-app.git

# Rename the default branch to main (to match GitHub's default):
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  git branch -M main

# push
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  git push -u origin main
Username for 'https://github.com': *******
Password for 'https://ktokang@github.com':
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 247 bytes | 247.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
To https://github.com/ktokang/my-sample-app.git
 * [new branch]      main -> main
branch 'main' set up to track 'origin/main'.


Set up authentication with an SSH key

# Generate & inspect the ssh key
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  ls -al ~/.ssh | grep ed25519
-rw-------@  1 howoo  staff   411 10월 24 23:47 id_ed25519
-rw-r--r--@  1 howoo  staff    98 10월 24 23:47 id_ed25519.pub

# 
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  eval "$(ssh-agent -s)"
Agent pid 8866

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  ssh-add ~/.ssh/id_ed25519
Identity added: /Users/howoo/.ssh/id_ed25519

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  cat ~/.ssh/id_ed25519.pub

 

Test connecting over ssh

# 
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  ssh -i ~/.ssh/id_ed25519 -T git@github.com
The authenticity of host 'github.com (20.200.245.247)' can't be established.
ED25519 key fingerprint is SHA256:+DiY3wvvV6TuJJhbpZisF/zLDA0zPMSvHdkr4UvCOqU.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added 'github.com' (ED25519) to the list of known hosts.
Hi ktokang! You've successfully authenticated, but GitHub does not provide shell access.

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  git remote set-url origin git@github.com:ktokang/my-sample-app.git

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  git add .

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main ✚  git commit -m "add readme.md file"
[main 604fcd4] add readme.md file
 3 files changed, 10 insertions(+)
 create mode 100644 readme.md

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app  ↱ main  git push -u origin main
Enumerating objects: 6, done.
Counting objects: 100% (6/6), done.
Delta compression using up to 10 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (5/5), 809 bytes | 809.00 KiB/s, done.
Total 5 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
To github.com:ktokang/my-sample-app.git
   88c932e..604fcd4  main -> main
branch 'main' set up to track 'origin/main'.


Clone the git source code using Tekton Pipelines

# Base64-encode the ssh private key used for git auth
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  SSHPK=$(cat ~/.ssh/id_ed25519 | base64 -w0

# Base64-encode the git server's known_hosts entry
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  SSHKH=$(ssh-keyscan github.com | grep ecdsa-sha2-nistp256 | base64 -w0)

# 
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: git-credentials
data:
  id_rsa: $SSHPK
  known_hosts: $SSHKH
EOF

secret/git-credentials created

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  kubectl get secret
NAME                           TYPE                 DATA   AGE
git-credentials                Opaque               2      47s
sh.helm.release.v1.pacman.v1   helm.sh/release.v1   1      24h
sh.helm.release.v1.pacman.v2   helm.sh/release.v1   1      24h
sh.helm.release.v1.pacman.v3   helm.sh/release.v1   1      24h
sh.helm.release.v1.pacman.v4   helm.sh/release.v1   1      24h

# Attach the secret to a ServiceAccount and verify
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  cat << EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
secrets:
  - name: git-credentials
EOF
serviceaccount/build-bot created

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  kubectl get sa
NAME        SECRETS   AGE
build-bot   1         6s
default     0         25h


Pipeline and PipelineRun

# Write the pipeline manifest
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  cat << EOF | kubectl apply -f -
apiVersion: tekton.dev/v1
kind: Pipeline
metadata:
  name: my-clone-read
spec:
  description: |
    This pipeline clones a git repo, then echoes the README file to the stout.
  params:     # parameter: repo-url
  - name: repo-url
    type: string
    description: The git repo URL to clone from.
  workspaces: # add a workspace, a shared volume that holds the downloaded code
  - name: shared-data
    description: |
      This workspace contains the cloned repo files, so they can be read by the
      next task.
  - name: git-credentials
    description: My ssh credentials
  tasks:      # task definitions
  - name: fetch-source
    taskRef:
      name: git-clone
    workspaces:
    - name: output
      workspace: shared-data
    - name: ssh-directory
      workspace: git-credentials
    params:
    - name: url
      value: \$(params.repo-url)
  - name: show-readme # add task
    runAfter: ["fetch-source"]
    taskRef:
      name: show-readme
    workspaces:
    - name: source
      workspace: shared-data
EOF

pipeline.tekton.dev/my-clone-read created

# Verify
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  tkn pipeline list
NAME            AGE              LAST RUN               STARTED          DURATION   STATUS
clone-read      1 hour ago       clone-read-run-fk5fb   51 minutes ago   19s        Succeeded

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  tkn pipeline describe
? Select pipeline:  [Use arrows to move, type to filter]
> clone-read
  my-clone-read
...

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  kubectl get pipeline -o wide
NAME            AGE
clone-read      62m
my-clone-read   2m18s

#
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  kubectl get pod
NAME                                    READY   STATUS      RESTARTS        AGE
clone-read-run-fk5fb-fetch-source-pod   0/1     Completed   0               53m
pacman-576769bb86-t5scc                 1/1     Running     2 (3h54m ago)   24h
 
# show-readme task
cat << EOF | kubectl apply -f -
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: show-readme
spec:
  description: Read and display README file.
  workspaces:
  - name: source
  steps:
  - name: read
    image: alpine:latest
    script: | 
      #!/usr/bin/env sh
      cat \$(workspaces.source.path)/readme.md
EOF

# Run the pipeline
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  cat << EOF | kubectl create -f -
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: clone-read-run-
spec:
  pipelineRef:
    name: my-clone-read
  taskRunTemplate:
    serviceAccountName: build-bot
    podTemplate:
      securityContext:
        fsGroup: 65532
  workspaces:
  - name: shared-data
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  - name: git-credentials
    secret:
      secretName: git-credentials
  params:
  - name: repo-url
    value: git@github.com:gasida/my-sample-app.git # the repo used here, or point at your own private repo
EOF

pipelinerun.tekton.dev/clone-read-run-cm7fg created


# Check the result: the new fetch-source pod fails (likely because repo-url above points at a private repo this SSH key cannot access)
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  kubectl get pod,pv,pvc
NAME                                        READY   STATUS      RESTARTS        AGE
pod/clone-read-run-cm7fg-fetch-source-pod   0/1     Error       0               19s
pod/clone-read-run-fk5fb-fetch-source-pod   0/1     Completed   0               55m
pod/pacman-576769bb86-t5scc                 1/1     Running     2 (3h56m ago)   24h

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE
persistentvolume/pvc-b5399cd4-c602-4fe4-9424-ee3ea6e76cac   1Gi        RWO            Delete           Bound    default/pvc-fcf9b36883   standard       <unset>                          55m
persistentvolume/pvc-bdd52583-7010-4db6-8f82-219133ee1988   1Gi        RWO            Delete           Bound    default/pvc-964ade2b5e   standard       <unset>                          16s

NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE
persistentvolumeclaim/pvc-964ade2b5e   Bound    pvc-bdd52583-7010-4db6-8f82-219133ee1988   1Gi        RWO            standard       <unset>                 19s
persistentvolumeclaim/pvc-fcf9b36883   Bound    pvc-b5399cd4-c602-4fe4-9424-ee3ea6e76cac   1Gi        RWO            standard       <unset>

# Check the Service Account on the pods that ran the tasks
minji  ~/Desktop/work/Gasida_series/practice/my-sample-app   main  kubectl describe pod | grep 'Service Account'
Service Account:  build-bot
Service Account:  default
Service Account:  default

# Clean up
kubectl delete taskruns,pipelineruns.tekton.dev --all
