Kubernetes Training

02. Kube Training - Installing k3s and kubespray

Jerry_이정훈 2021. 6. 4. 16:36

Hands-on

  • Install k3s
  • Install kubespray
  • Register the remote k3s and kubespray clusters on the local PC

Why 나작클 (My Own Little Cluster)

For installation, use whichever tool you find comfortable: k3s, kubespray, kubeadm, minikube, and so on. I went with kubespray because I'm used to Ansible. What matters is having your own little cloud (or cluster; 나작클 for short, from the Korean for "my own little cloud"). You need a cluster you can tear down and rebuild anytime, and run any experiment on without worrying about anyone else, whether from home or a cafe. Kube is a complex beast; you only really get to know it by trying commands freely, without fearing cluster errors or deletion.

If you don't have one, ask your company for one if you must, but do get one. Memory prices have come down, so a server with 128 GB or 256 GB of RAM probably costs around 6 to 8 million KRW. With just three 4 GB VMs per person, you get a Kube cluster very close to a real production environment. Another strength of Kube is that it behaves almost identically regardless of cluster size and environment.

 

Personally, I think any tool is fine, or even an already-provisioned cluster such as EKS. Time spent on installation is time wasted; there is plenty to do in kube besides installing it. I once spent all my energy trying to install OpenStack, failed, and gave up on OpenStack entirely; in hindsight that was foolish. I hope you won't be as foolish as I was.

 

Installing k3s

With k3s you can stand up a kube cluster quickly on a single VM. Think of k3s as a slimmed-down k8s and you'll get the idea.

What is k3s
Lightweight Kubernetes. Easy to install, half the memory, all in a binary of less than 100 MB. (from the official homepage)

The Kube cluster install completes in about two minutes (almost anticlimactically fast). The one caveat: stop and disable the firewall before installing, as sketched below. The install log that follows is from CentOS 7.
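On CentOS 7 that firewall step would look like the following (a sketch assuming firewalld; the k3s docs suggest disabling it, or opening the required ports instead):

# stop and disable firewalld before running the k3s installer
systemctl stop firewalld
systemctl disable firewalld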

[root@localhost ~]# curl -sfL https://get.k3s.io | sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.20.7+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.20.7+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.20.7+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirror.kakao.com
 * extras: mirror.kakao.com
 * updates: mirror.kakao.com
base                                                                                                                           | 3.6 kB  00:00:00
extras                                                                                                                         | 2.9 kB  00:00:00
updates                                                                                                                        | 2.9 kB  00:00:00
(1/4): base/7/x86_64/group_gz                                                                                                  | 153 kB  00:00:00
(2/4): extras/7/x86_64/primary_db                                                                                              | 236 kB  00:00:00
(3/4): updates/7/x86_64/primary_db                                                                                             | 8.0 MB  00:00:00
(4/4): base/7/x86_64/primary_db                                                                                                | 6.1 MB  00:00:01
Resolving Dependencies
--> Running transaction check
---> Package yum-utils.noarch 0:1.1.31-52.el7 will be updated
---> Package yum-utils.noarch 0:1.1.31-54.el7_8 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================================
 Package                             Arch                             Version                                    Repository                      Size
======================================================================================================================================================
Updating:
 yum-utils                           noarch                           1.1.31-54.el7_8                            base                           122 k

Transaction Summary
======================================================================================================================================================
Upgrade  1 Package

Total download size: 122 k
Downloading packages:
No Presto metadata available for base
yum-utils-1.1.31-54.el7_8.noarch.rpm                                                                                           | 122 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Updating   : yum-utils-1.1.31-54.el7_8.noarch                                                                                                   1/2
  Cleanup    : yum-utils-1.1.31-52.el7.noarch                                                                                                     2/2
  Verifying  : yum-utils-1.1.31-54.el7_8.noarch                                                                                                   1/2
  Verifying  : yum-utils-1.1.31-52.el7.noarch                                                                                                     2/2

Updated:
  yum-utils.noarch 0:1.1.31-54.el7_8

Complete!
Loaded plugins: fastestmirror, langpacks
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirror.kakao.com
 * extras: mirror.kakao.com
 * updates: mirror.kakao.com
rancher-k3s-common-stable                                                                                                      | 2.9 kB  00:00:00
rancher-k3s-common-stable/primary_db                                                                                           | 2.2 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package k3s-selinux.noarch 0:0.3-0.el7 will be installed
--> Processing Dependency: container-selinux >= 2.107-3 for package: k3s-selinux-0.3-0.el7.noarch
--> Running transaction check
---> Package container-selinux.noarch 2:2.119.2-1.911c772.el7_8 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

======================================================================================================================================================
 Package                           Arch                   Version                                     Repository                                 Size
======================================================================================================================================================
Installing:
 k3s-selinux                       noarch                 0.3-0.el7                                   rancher-k3s-common-stable                  14 k
Installing for dependencies:
 container-selinux                 noarch                 2:2.119.2-1.911c772.el7_8                   extras                                     40 k

Transaction Summary
======================================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 53 k
Installed size: 123 k
Downloading packages:
(1/2): container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm                                                                    |  40 kB  00:00:00
warning: /var/cache/yum/x86_64/7/rancher-k3s-common-stable/packages/k3s-selinux-0.3-0.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID e257814a: NOKEY
Public key for k3s-selinux-0.3-0.el7.noarch.rpm is not installed
(2/2): k3s-selinux-0.3-0.el7.noarch.rpm                                                                                        |  14 kB  00:00:00
------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                  56 kB/s |  53 kB  00:00:00
Retrieving key from https://rpm.rancher.io/public.key
Importing GPG key 0xE257814A:
 Userid     : "Rancher (CI) <ci@rancher.com>"
 Fingerprint: c8cf f216 4551 26e9 b9c9 18be 925e a29a e257 814a
 From       : https://rpm.rancher.io/public.key
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch                                                                                 1/2
  Installing : k3s-selinux-0.3-0.el7.noarch                                                                                                       2/2
  Verifying  : k3s-selinux-0.3-0.el7.noarch                                                                                                       1/2
  Verifying  : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch                                                                                 2/2

Installed:
  k3s-selinux.noarch 0:0.3-0.el7

Dependency Installed:
  container-selinux.noarch 2:2.119.2-1.911c772.el7_8

Complete!
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s.service to /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s

Once the installation completes, you can check the PODs as below. (Run as root.)

[root@localhost ~]# kubectl get pods -A
NAMESPACE     NAME                                      READY   STATUS      RESTARTS   AGE
kube-system   coredns-854c77959c-6k2g2                  1/1     Running     0          16h
kube-system   local-path-provisioner-5ff76fc89d-nzd5v   1/1     Running     8          16h
kube-system   metrics-server-86cbb8457f-2zdgv           1/1     Running     8          16h
kube-system   helm-install-traefik-5bmff                0/1     Completed   8          16h
kube-system   svclb-traefik-7dj6b                       2/2     Running     0          15h
kube-system   traefik-6f9cbd9bd4-8mgqz                  1/1     Running     0          15h
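
Root is required because k3s stores its kubeconfig at /etc/rancher/k3s/k3s.yaml with root-only permissions. A minimal sketch for using kubectl as a regular user on the same node (assuming sudo access):

# copy the k3s kubeconfig into the user's home and take ownership
mkdir -p ~/.kube
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
kubectl get pods -A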

Compared to other kube clusters, noticeably fewer PODs are running. For comparison, the kube-system namespace POD list of a regular cluster looks like this:

[spkr@erdia22 ~ (spkn01:kube-system)]$ kgp
NAME                                      READY   STATUS    RESTARTS   AGE    IP              NODE    NOMINATED NODE   READINESS GATES
calico-kube-controllers-8b5ff5d58-79lsf   1/1     Running   0          25d    172.17.16.152   node2   <none>           <none>
calico-node-8h4hq                         1/1     Running   2          117d   172.17.16.153   node3   <none>           <none>
calico-node-mn52j                         1/1     Running   2          117d   172.17.16.152   node2   <none>           <none>
calico-node-njlt4                         1/1     Running   3          117d   172.17.16.151   node1   <none>           <none>
coredns-85967d65-49tzn                    1/1     Running   0          21d    10.233.92.173   node3   <none>           <none>
coredns-85967d65-nwpbq                    1/1     Running   2          117d   10.233.96.241   node2   <none>           <none>
dns-autoscaler-5b7b5c9b6f-zvc4x           1/1     Running   2          117d   10.233.96.233   node2   <none>           <none>
kube-apiserver-node1                      1/1     Running   3          117d   172.17.16.151   node1   <none>           <none>
kube-apiserver-node2                      1/1     Running   183        117d   172.17.16.152   node2   <none>           <none>
kube-controller-manager-node1             1/1     Running   7          117d   172.17.16.151   node1   <none>           <none>
kube-controller-manager-node2             1/1     Running   319        117d   172.17.16.152   node2   <none>           <none>
kube-proxy-nwjbw                          1/1     Running   2          116d   172.17.16.153   node3   <none>           <none>
kube-proxy-qgjwf                          1/1     Running   2          116d   172.17.16.152   node2   <none>           <none>
kube-proxy-vz8sl                          1/1     Running   3          116d   172.17.16.151   node1   <none>           <none>
kube-scheduler-node1                      1/1     Running   7          117d   172.17.16.151   node1   <none>           <none>
kube-scheduler-node2                      1/1     Running   301        117d   172.17.16.152   node2   <none>           <none>
nginx-proxy-node3                         1/1     Running   7          117d   172.17.16.153   node3   <none>           <none>
nodelocaldns-4dqjm                        1/1     Running   3          117d   172.17.16.153   node3   <none>           <none>
nodelocaldns-7tjjc                        1/1     Running   3          117d   172.17.16.152   node2   <none>           <none>
nodelocaldns-9lgzs                        1/1     Running   3          117d   172.17.16.151   node1   <none>           <none>

To manage the cluster remotely, copy the kube config file to your local PC. First, find the k3s config file location: the cluster information (kube config) lives in a rancher config file, not in $HOME/.kube/config as on a typical Kube cluster.

[root@localhost ~]# cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTWpJME5EazFNamN3SGhjTk1qRXdOVE14TURneU5USTNXaGNOTXpFd05USTVNRGd5TlRJMwpXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTWpJME5EazFNamN3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFTRHZCeXJPRThyNmNLMTV3OWRnVUk3UGJUK2oyenB1ZFVRK2pwa2xkN0QKQi8wVDF6Ti9YSHI1V3pnRzR3YUpIYUdUSXJrRGZkL2dMTkp2L2JXc29ZWGZvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVVlrbm92RzNpSkRuY0FTSGw0eGxuCnRwdlM1VnN3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnT3VweWJMT3VRM2RQYk9hN2E4M3psYUVzcm9pNmdwOFcKa3VFb3VmS0pRbThDSUQ0ZGdIZGF5bzJwQ0xBUEZDNXF5Y0t3K2hiQ05Ib2c5RW81akduUWhUTjAKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://127.0.0.1:6443
  name: default
(remainder omitted)

Add the contents of this file to the ~/.kube/config file under your own account on the local PC, as shown in the sketch after this list.

  • The IP is set to 127.0.0.1; change it to a node IP reachable from outside.
  • Name the context, cluster, and user whatever distinguishes this cluster from your others.
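A minimal sketch of that copy-and-edit, assuming the node IP 172.17.29.166 from the kgn output below and a context named k3s:

# copy the k3s kubeconfig to the local PC (paths and IP are examples)
scp root@172.17.29.166:/etc/rancher/k3s/k3s.yaml ~/.kube/config-k3s
# point the API server at the node IP instead of loopback
sed -i 's/127.0.0.1/172.17.29.166/' ~/.kube/config-k3s
# rename the default context so it does not collide with other clusters
KUBECONFIG=~/.kube/config-k3s kubectl config rename-context default k3s

The cluster and user entries are still named default; if you keep several clusters in one file, edit those names in the file by hand as well.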



Now the cluster can be checked from the remote PC as well.

[spkr@erdia22 ~ (spkn01:kube-system)]$ kctx k3s
Switched to context "k3s".

[spkr@erdia22 ~ (k3s:default)]$ kgn
NAME                    STATUS   ROLES                  AGE   VERSION        INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
localhost.localdomain   Ready    control-plane,master   16h   v1.20.7+k3s1   172.17.29.166   <none>        CentOS Linux 7 (Core)   3.10.0-1062.12.1.el7.x86_64   containerd://1.4.4-k3s1

(I'll cover the kctx tool in the next post.)

 

You now have a kube cluster of your own. Personally, since my company has spare VMs, I run three regular(?) clusters for testing rather than k3s. If you're short on VMs, k3s on a single VM (or your PC) should work just fine.

 

Installing kubespray

For production, staging, or test Kube clusters, I use kubespray. Basic Ansible knowledge helps (just a little) when working with kubespray. Installation follows the official guide.

 

kubespray GitHub: https://github.com/kubernetes-sigs/kubespray

 

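The steps leading up to the pip install below follow the kubespray README; the kubespray-1.21 directory name is just what my prompt shows, so use any checkout name you like:

# clone kubespray and start from the sample inventory
git clone https://github.com/kubernetes-sigs/kubespray.git kubespray-1.21
cd kubespray-1.21
cp -rfp inventory/sample inventory/mycluster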

[spkr@erdia22 kubespray-1.21 (k3s:default)]$ sudo pip3 install -r requirements.txt 
The directory '/home/spkr/.cache/pip/http' or its parent directory is not owned by the current user and the cache has been disabled. Please check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
The directory '/home/spkr/.cache/pip' or its parent directory is not owned by the current user and caching wheels has been disabled. check the permissions and owner of that directory. If executing pip with sudo, you may want sudo's -H flag.
Collecting ansible==2.9.20 (from -r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/ed/53/01fe1f54d8d408306b72c961e573223a0d95eca26d6c3b59d57a9c64e4ef/ansible-2.9.20.tar.gz (14.3MB)
    100% |████████████████████████████████| 14.3MB 84kB/s 
Collecting cryptography==2.8 (from -r requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/45/73/d18a8884de8bffdcda475728008b5b13be7fbef40a2acc81a0d5d524175d/cryptography-2.8-cp34-abi3-manylinux1_x86_64.whl (2.3MB)
    100% |████████████████████████████████| 2.3MB 389kB/s 
Collecting jinja2==2.11.3 (from -r requirements.txt (line 3))
  Downloading https://files.pythonhosted.org/packages/7e/c2/1eece8c95ddbc9b1aeb64f5783a9e07a286de42191b7204d67b7496ddf35/Jinja2-2.11.3-py2.py3-none-any.whl (125kB)
    100% |████████████████████████████████| 133kB 5.1MB/s 
Requirement already satisfied: netaddr==0.7.19 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 4))
Requirement already satisfied: pbr==5.4.4 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 5))
Requirement already satisfied: jmespath==0.9.5 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 6))
Requirement already satisfied: ruamel.yaml==0.16.10 in /usr/local/lib/python3.7/dist-packages (from -r requirements.txt (line 7))
Requirement already satisfied: MarkupSafe==1.1.1 in /home/spkr/.local/lib/python3.7/site-packages (from -r requirements.txt (line 8))
Requirement already satisfied: PyYAML in /home/spkr/.local/lib/python3.7/site-packages (from ansible==2.9.20->-r requirements.txt (line 1))
Requirement already satisfied: cffi!=1.11.3,>=1.8 in /home/spkr/.local/lib/python3.7/site-packages (from cryptography==2.8->-r requirements.txt (line 2))
Requirement already satisfied: six>=1.4.1 in /home/spkr/.local/lib/python3.7/site-packages (from cryptography==2.8->-r requirements.txt (line 2))
Requirement already satisfied: ruamel.yaml.clib>=0.1.2; platform_python_implementation == "CPython" and python_version < "3.9" in /home/spkr/.local/lib/python3.7/site-packages (from ruamel.yaml==0.16.10->-r requirements.txt (line 7))
Requirement already satisfied: pycparser in /home/spkr/.local/lib/python3.7/site-packages (from cffi!=1.11.3,>=1.8->cryptography==2.8->-r requirements.txt (line 2))
Installing collected packages: jinja2, cryptography, ansible
  Found existing installation: Jinja2 2.11.1
    Uninstalling Jinja2-2.11.1:
      Successfully uninstalled Jinja2-2.11.1
  Found existing installation: cryptography 3.3.1
    Uninstalling cryptography-3.3.1:
      Successfully uninstalled cryptography-3.3.1
  Found existing installation: ansible 2.9.16
    Uninstalling ansible-2.9.16:
      Successfully uninstalled ansible-2.9.16
  Running setup.py install for ansible ... done
Successfully installed ansible-2.9.20 cryptography-2.8 jinja2-2.11.3

In my case, generating the hosts.yml file failed with the error below, so I created hosts.yml manually.

[spkr@erdia22 kubespray-1.21 (k3s:default)]$ CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
Traceback (most recent call last):
  File "contrib/inventory_builder/inventory.py", line 40, in <module>
    from ruamel.yaml import YAML
ModuleNotFoundError: No module named 'ruamel'
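
If you'd rather fix the error than write the file by hand, installing the missing Python module and declaring the node IPs should let the inventory builder run; a sketch, using the three node IPs registered in /etc/hosts below:

# install the module the traceback complains about, then rerun the builder
sudo pip3 install ruamel.yaml
declare -a IPS=(172.17.28.171 172.17.28.172 172.17.28.173)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}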

Here is my inventory/mycluster/hosts.yml. First, register the node names in /etc/hosts:

[spkr@erdia22 ~ (kspray:default)]$ cat /etc/hosts |grep ksp
172.17.28.171   ksp1
172.17.28.172   ksp2
172.17.28.173   ksp3

vi inventory/mycluster/hosts.yml 
all:
  hosts:
    ksp1:
      ansible_host: ksp1
    ksp2:
      ansible_host: ksp2
    ksp3:
      ansible_host: ksp3
  children:
    kube-master:
      hosts:
        ksp1:
        ksp2:
        ksp3:
    kube-node:
      hosts:
        ksp1:
        ksp2:
        ksp3:
    etcd:
      hosts:
        ksp1:
        ksp2:
        ksp3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
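
Before running the playbook, it's worth confirming that Ansible can reach every node with this inventory (a standard Ansible connectivity check, assuming SSH keys are already distributed):

# every node should answer with "pong"
ansible -i inventory/mycluster/hosts.yml all -m ping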

Next, here are the variables I modified for the installation.

vi inventory/mycluster/group_vars/k8s_cluster/k8s_cluster.yml

# configure arp_ignore and arp_announce to avoid answering ARP queries from kube-ipvs0 interface
# must be set to true for MetalLB to work
kube_proxy_strict_arp: true

## docker for docker, crio for cri-o and containerd for containerd.
container_manager: crio

# audit log for kubernetes
kubernetes_audit: true

If you change the container runtime from docker to crio, you must also set etcd_deployment_type to 'host' in inventory/mycluster/group_vars/etcd.yml as below, since the default deployment type runs etcd as a docker container, which is no longer possible without docker.

## Settings for etcd deployment type
etcd_deployment_type: host

Now let's run the installation. The full install took me a good 12 minutes, so be patient. ^^

[spkr@erdia22 kubespray-1.21 (kspray:default)]$ ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml 

(생략) 

PLAY RECAP ************************************************************************************************************************************************************************************************************
ksp1                       : ok=586  changed=130  unreachable=0    failed=0    skipped=1149 rescued=0    ignored=1   
ksp2                       : ok=520  changed=118  unreachable=0    failed=0    skipped=995  rescued=0    ignored=0   
ksp3                       : ok=522  changed=119  unreachable=0    failed=0    skipped=993  rescued=0    ignored=0   
localhost                  : ok=3    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

Tuesday 01 June 2021  10:37:43 +0900 (0:00:00.072)       0:12:39.482 ********** 
=============================================================================== 
container-engine/cri-o : Install cri-o packages --------------------------------------------------------------------------------------------------------------------------------------------------------------- 53.44s
kubernetes/control-plane : Joining control plane node to the cluster. ----------------------------------------------------------------------------------------------------------------------------------------- 51.13s
kubernetes/preinstall : Install packages requirements --------------------------------------------------------------------------------------------------------------------------------------------------------- 34.78s
kubernetes/control-plane : kubeadm | Initialize first master -------------------------------------------------------------------------------------------------------------------------------------------------- 31.03s
download_container | Download image if required --------------------------------------------------------------------------------------------------------------------------------------------------------------- 21.53s
download_container | Download image if required --------------------------------------------------------------------------------------------------------------------------------------------------------------- 19.79s
download_container | Download image if required --------------------------------------------------------------------------------------------------------------------------------------------------------------- 18.56s
download_file | Download item --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 18.40s
download_container | Download image if required --------------------------------------------------------------------------------------------------------------------------------------------------------------- 15.20s
download_container | Download image if required --------------------------------------------------------------------------------------------------------------------------------------------------------------- 14.58s
download_container | Download image if required --------------------------------------------------------------------------------------------------------------------------------------------------------------- 14.21s
kubernetes/control-plane : Master | wait for kube-scheduler --------------------------------------------------------------------------------------------------------------------------------------------------- 13.25s
download_file | Download item --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 12.19s
download_file | Download item --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 11.07s
download_container | Download image if required --------------------------------------------------------------------------------------------------------------------------------------------------------------- 10.65s
Gen_certs | Write etcd member and admin certs to other etcd nodes --------------------------------------------------------------------------------------------------------------------------------------------- 10.16s
Gen_certs | Write etcd member and admin certs to other etcd nodes ---------------------------------------------------------------------------------------------------------------------------------------------- 9.99s
download_container | Download image if required ---------------------------------------------------------------------------------------------------------------------------------------------------------------- 9.82s
download_file | Download item ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 8.23s
download_file | Download item ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 7.67s

After the installation completes, copy the remote server's root ~/.kube/config to your local PC for management, just as with k3s.

[root@ksp1 ~]# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EWXdNVEF4TXpReU5Wb1hEVE14TURVek1EQXhNelF5TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS1NuCmVkNllwejBqZkZBaHVDOGlqalp2Wmx1TXpXQm9UY0llY0JKU0lSMUgvNGV1N3kvV213N0h1SkRScytVdVZuM1gKanFQcllraU82bzhzeEVxY0NXR214VW5wNkc3NzJhemV6WGwyWTMxUlREWjAwWkJVMTR2K2pDMm9LSEVTTTVMMQpYT1AvaFZoNllZTlVSRXhYdURUQXdvaDdhTDYyK3ROUmhJM3laVExBb2NQMHVFL2h5dnlyY215Um1IQk5MTDBmCnErSkIrbWNDSWt4YXJsOEJ4YnNncStyZ0NWK1VkTzdHVXhxOE5iaU9TUlN4Z0VGMjdDaVpDb0tqVmZmVis2N1YKSmE1eUwrRTg1dEwxV09xV3VtdnRseitPWVFIWWp3MlJUVk1qQUMyMjN2R0xTdUNzTHRRNFRUa2
(생략)

Copy it to the local PC (details omitted; the procedure is the same as for k3s).
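A sketch of that copy, merging the new cluster into the existing local config instead of overwriting it (the ksp1 alias comes from /etc/hosts above; as with k3s, make sure server: points at a master IP reachable from your PC):

scp root@ksp1:~/.kube/config ~/.kube/config-kspray
# merge and flatten both kubeconfigs into a single file
KUBECONFIG=~/.kube/config:~/.kube/config-kspray kubectl config view --flatten > /tmp/config
mv /tmp/config ~/.kube/config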

Now you can check the full POD status from the local PC.

[spkr@erdia22 ~ (kspray:default)]$ kgpa
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-5b4d7b4594-rb9tg   1/1     Running   0          20m   172.17.28.173   ksp3   <none>           <none>
kube-system   calico-node-hjjpx                          1/1     Running   0          20m   172.17.28.172   ksp2   <none>           <none>
kube-system   calico-node-q64rn                          1/1     Running   0          20m   172.17.28.171   ksp1   <none>           <none>
kube-system   calico-node-sg2ht                          1/1     Running   0          20m   172.17.28.173   ksp3   <none>           <none>
kube-system   coredns-8474476ff8-g5fhv                   1/1     Running   0          20m   10.233.79.1     ksp2   <none>           <none>
kube-system   coredns-8474476ff8-vxttw                   1/1     Running   0          20m   10.233.87.1     ksp3   <none>           <none>
kube-system   dns-autoscaler-7df78bfcfb-jtbhj            1/1     Running   0          20m   10.233.127.1    ksp1   <none>           <none>
kube-system   kube-apiserver-ksp1                        1/1     Running   0          22m   172.17.28.171   ksp1   <none>           <none>
kube-system   kube-apiserver-ksp2                        1/1     Running   0          22m   172.17.28.172   ksp2   <none>           <none>
kube-system   kube-apiserver-ksp3                        1/1     Running   0          21m   172.17.28.173   ksp3   <none>           <none>
kube-system   kube-controller-manager-ksp1               1/1     Running   0          22m   172.17.28.171   ksp1   <none>           <none>
kube-system   kube-controller-manager-ksp2               1/1     Running   0          22m   172.17.28.172   ksp2   <none>           <none>
kube-system   kube-controller-manager-ksp3               1/1     Running   0          21m   172.17.28.173   ksp3   <none>           <none>
kube-system   kube-proxy-64bkk                           1/1     Running   0          20m   172.17.28.171   ksp1   <none>           <none>
kube-system   kube-proxy-7xs4q                           1/1     Running   0          20m   172.17.28.172   ksp2   <none>           <none>
kube-system   kube-proxy-gm4dh                           1/1     Running   0          20m   172.17.28.173   ksp3   <none>           <none>
kube-system   kube-scheduler-ksp1                        1/1     Running   0          22m   172.17.28.171   ksp1   <none>           <none>
kube-system   kube-scheduler-ksp2                        1/1     Running   0          22m   172.17.28.172   ksp2   <none>           <none>
kube-system   kube-scheduler-ksp3                        1/1     Running   0          21m   172.17.28.173   ksp3   <none>           <none>
kube-system   nodelocaldns-ct76v                         1/1     Running   0          20m   172.17.28.171   ksp1   <none>           <none>
kube-system   nodelocaldns-r4gn6                         1/1     Running   0          20m   172.17.28.173   ksp3   <none>           <none>
kube-system   nodelocaldns-t2wf2                         1/1     Running   0          20m   172.17.28.172   ksp2   <none>           <none>

[spkr@erdia22 ~ (kspray:default)]$ kgn
NAME   STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
ksp1   Ready    control-plane,master   23m   v1.21.1   172.17.28.171   <none>        CentOS Linux 7 (Core)   3.10.0-1062.12.1.el7.x86_64   cri-o://1.21.0
ksp2   Ready    control-plane,master   23m   v1.21.1   172.17.28.172   <none>        CentOS Linux 7 (Core)   3.10.0-1062.12.1.el7.x86_64   cri-o://1.21.0
ksp3   Ready    control-plane,master   22m   v1.21.1   172.17.28.173   <none>        CentOS Linux 7 (Core)   3.10.0-1062.12.1.el7.x86_64   cri-o://1.21.0

The kube cluster installation is now complete.

 

References

 

k3s 시리즈 - 간단하게 Kubernetes 환경 구축하기 (k3s series: building a Kubernetes environment the easy way), si.mpli.st

 
