(CKA) 02. Scheduler

Summary notes from studying for the CKA.

Node name

  • default: kube-scheduler
  • the default scheduler assigns each pod to a node (it fills in the pod's nodeName)

Checking status

  • kubectl get node

  • short name: node = no

    > k get no
    NAME           STATUS   ROLES                  AGE     VERSION
    controlplane   Ready    control-plane,master   11m     v1.20.0
    node01         Ready    <none>                 9m48s   v1.20.0
    

Manual scheduling

  • to schedule a pod on a node manually
  • set nodeName in the pod spec
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
      nodeName: node01
    
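After applying the manifest, confirm the assignment with the wide output (pod name from the manifest above):

    kubectl get pod nginx -o wide   # NODE column should show node01

Note that nodeName only works at creation time; to move a running pod you have to delete and recreate it.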

Label & Selectors

Label

  • definition

    apiVersion: ~~
    kind: ~~
    metadata:
      name: ~~
      labels:
        app: App1
        function: Front-end
    spec: ~~
    

    → labels go under metadata.labels

  • imperative

    kubectl label node node01 key=value
    

Selector

  • kubectl get pods --selector app=App1
  • multiple selector fields can be combined
    • kubectl get pods --selector app=App1,env=dev
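The selector works with other kubectl verbs too; for example, counting matching pods (the label values here are illustrative):

    kubectl get pods --selector env=dev --no-headers | wc -l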

ReplicaSet

  • replica-definition.yaml
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: simple-webapp
      labels:
        app: App1
        function: Front-end
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: App1
      template:
        metadata:
          labels:
            app: App1
            function: Front-end
        spec:
          containers:
          - name: simple-webapp
            image: simple-webapp
    
  • the labels at the top are the labels of the ReplicaSet itself.
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: simple-webapp
      labels:
        app: App1
        function: Front-end
    
  • to connect the ReplicaSet to its pods, configure the selector field
        selector:
          matchLabels:
            app: App1
    

Annotations

  • used to record other details for informational purposes
  • e.g. tool details (a sketch follows below)
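A minimal sketch of where annotations go (the key and value are illustrative):

    metadata:
      annotations:
        buildversion: "1.34"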

Taints, Tolerations

  • taints
    • set on nodes
  • tolerations
    • set on pods
  • taints and tolerations do not tell a pod to go to a particular node
  • they only make a node accept pods with matching tolerations

Taints

Setting

  • kubectl taint nodes node-name key=value:taint-effect
  • taint-effect:
    • NoSchedule
      • pods will not be scheduled on the node unless they tolerate the taint
    • PreferNoSchedule
      • the system will try to avoid placing a pod on the node, but this is not guaranteed
    • NoExecute
      • new pods will not be scheduled on the node
      • existing pods on the node will be evicted if they do not tolerate the taint
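For example, a taint that matches the toleration shown in the Tolerations section below:

    kubectl taint nodes node01 app=blue:NoSchedule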

Checking

  • describe & grep
    k describe node node01 |grep -i taints
    Taints:             <none>
    

Removing

  • kubectl taint nodes node-name key=value:taint-effect- (note the trailing -)
    kubectl taint nodes master/controlplane node-role.kubernetes.io/master:NoSchedule-
    

Tolerations

Setting

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: nginx-container
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"

Assigning PODs to Nodes

  1. Node Selector
  2. Node Affinity

Node Selector

  • label nodes (the labeling command is shown after this list)

  • use nodeSelector with label

    apiVersion:
    kind:
    metadata:
      name:
    spec:
      containers:
      - name:
        image:
      nodeSelector:
        size: Large
    
  • limitations

    • matches only exact labels
    • complex constraints such as "Large OR Medium" or "NOT Small" cannot be expressed
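The node label that the nodeSelector above matches is applied with the imperative label command from earlier:

    kubectl label nodes node01 size=Large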

Node Affinity

  • nodeSelector vs affinity
    • both achieve the same result here, scheduling pods onto a node labeled size=Large; affinity just allows richer expressions

Setting

apiVersion:
kind:
metadata:
  name:
spec:
  containers:
  - name:
    image:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key:
            operator:
            values:
  • operator (a filled-in example follows this list)
    • In
      • schedule only on nodes whose label value is in the given list
    • NotIn
      • schedule only on nodes whose label value is not in the given list
    • Exists
      • only checks that the key exists on the node; no values are needed
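A filled-in example of the structure above, assuming the size=Large node label used in the Node Selector section:

    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: size
              operator: In
              values:
              - Large
              - Medium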

Node Affinity Types

  • available
    • requiredDuringSchedulingIgnoredDuringExecution
    • preferredDuringSchedulingIgnoredDuringExecution
  • planned
    • requiredDuringSchedulingRequiredDuringExecution
  • two states in the life cycle of a pod
    • DuringScheduling
      • Required
        • the rule must be satisfied before the pod is placed
      • Preferred
        • the scheduler does its best, but places the pod anyway if no node matches
    • DuringExecution
      • Ignored
        • pods already running are not affected by later label changes
      • Required
        • pods that no longer satisfy the rule will be evicted

Resource Limits

Resource Requests

Setting

apiVersion:
kind:
metadata:
  name:
spec:
  containers:
  - name:
    image:
    resources:
      requests:
        memory: "1Gi"
        cpu: 1
  • cpu:
    • 0.1 == 100m (millicores)
    • lowest possible value: 1m
  • memory: suffixes K/M/G (decimal) or Ki/Mi/Gi (binary); 1G = 1,000,000,000 bytes, 1Gi = 1,073,741,824 bytes

Resource Limits

Setting

apiVersion:
kind:
metadata:
  name:
spec:
  containers:
  - name:
    image:
    resources:
      requests:
        memory: "1Gi"
        cpu: 1
      limits:
        memory: "2Gi"
        cpu: 2
  • default limits (applied through a LimitRange, if one exists in the namespace; see the sketch below)
    • 1 vCPU
    • 512Mi memory
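Those defaults are not built into Kubernetes; they come from a LimitRange in the namespace. A minimal sketch that would impose them (the object name is illustrative):

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: default-resource-limits
    spec:
      limits:
      - default:          # limits applied to containers that don't set their own
          cpu: 1
          memory: 512Mi
        defaultRequest:   # requests applied to containers that don't set their own
          cpu: 0.5
          memory: 256Mi
        type: Container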

Daemon Sets

  • one copy of the pod is always present on every node in the cluster

use cases

  • monitoring agents and log collectors
  • kube-proxy
  • networking agents (e.g. weave-net)

Creating

  • DaemonSet vs ReplicaSet: the definitions are nearly identical; only the kind differs (and a DaemonSet needs no replicas field)

  • definition

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: monitoring-daemon
    spec:
      selector:
        matchLabels:
          app: monitoring-daemon
      template:
        metadata:
          labels:
            app: monitoring-daemon
        spec:
          containers:
          - name: monitoring-agent
            image: monitoring-agent
    

Status

  • kubectl get daemonsets
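The short name ds also works, and -A covers all namespaces:

    kubectl get ds -A
    kubectl describe daemonset monitoring-daemon   # name from the definition above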

Static PODs

Static PODs vs DaemonSets

  • static pods
    • created directly by the kubelet from manifest files, without going through the kube-apiserver
  • DaemonSet pods
    • created through the kube-apiserver by the DaemonSet controller
  • both kinds are ignored by the kube-scheduler

Config

  1. --pod-manifest-path=/etc/kubernetes/manifests
  2. --config=kubeconfig.yaml
    # kubeconfig.yaml
    staticPodPath: /etc/kubernetes/manifests
    
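Dropping a manifest into the configured directory is enough for the kubelet to start the pod. One way to generate such a manifest (the pod name and image are illustrative):

    kubectl run static-busybox --image=busybox --dry-run=client -o yaml > /etc/kubernetes/manifests/static-busybox.yaml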

File location

First, identify the kubelet config file:

root@controlplane:~# ps -aux | grep /usr/bin/kubelet
root      3668  0.0  1.5 1933476 63076 ?       Ssl  Mar13  16:18 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2
root      4879  0.0  0.0  11468  1040 pts/0    S+   00:06   0:00 grep --color=auto /usr/bin/kubelet
root@controlplane:~#

From the output we can see that the kubelet config file used is /var/lib/kubelet/config.yaml

Next, lookup the value assigned for staticPodPath:

root@controlplane:~# grep -i staticpod /var/lib/kubelet/config.yaml
staticPodPath: /etc/kubernetes/manifests
root@controlplane:~#

As you can see, the path configured is the /etc/kubernetes/manifests directory.

Status

  • kubectl get po -A
    • static pods show the node name appended to the pod name (e.g. the -controlplane suffix below)
    > k get po -A
    NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
    kube-system   coredns-74ff55c5b-578df                1/1     Running   0          25m
    kube-system   coredns-74ff55c5b-nnjdp                1/1     Running   0          25m
    kube-system   etcd-controlplane                      1/1     Running   0          25m
    kube-system   kube-apiserver-controlplane            1/1     Running   0          25m
    kube-system   kube-controller-manager-controlplane   1/1     Running   0          25m
    kube-system   kube-flannel-ds-q7plw                  1/1     Running   0          24m
    kube-system   kube-flannel-ds-w5rnm                  1/1     Running   0          25m
    kube-system   kube-proxy-5dg8f                       1/1     Running   0          24m
    kube-system   kube-proxy-ld2qq                       1/1     Running   0          25m
    kube-system   kube-scheduler-controlplane            1/1     Running   0          25m
    
    • grep -i controlplane
    
    > k get po -A |grep -i controlplane
    kube-system   etcd-controlplane                      1/1     Running   0          27m
    kube-system   kube-apiserver-controlplane            1/1     Running   0          27m
    kube-system   kube-controller-manager-controlplane   1/1     Running   0          27m
    kube-system   kube-scheduler-controlplane            1/1     Running   0          27m
    

Deleting

First, let’s identify the node in which the pod called static-greenbox is created. To do this, run:

root@controlplane:~# kubectl get pods --all-namespaces -o wide | grep static-greenbox
default       static-greenbox-node01   1/1     Running   0          19s     10.244.1.2   node01   <none>   <none>
root@controlplane:~#

From the result of this command, we can see that the pod is running on node01.

Next, SSH to node01 and identify the path configured for static pods in this node.

Important: The path need not be /etc/kubernetes/manifests.

Make sure to check the path configured in the kubelet configuration file.

root@controlplane:~# ssh node01
root@node01:~# ps -ef |  grep /usr/bin/kubelet
root       752   654  0 00:30 pts/0    00:00:00 grep --color=auto /usr/bin/kubelet
root     28567     1  0 00:22 ?        00:00:11 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2
root@node01:~# grep -i staticpod /var/lib/kubelet/config.yaml
staticPodPath: /etc/just-to-mess-with-you
root@node01:~#

Here the staticPodPath is /etc/just-to-mess-with-you

Navigate to this directory and delete the YAML file:

root@node01:/etc/just-to-mess-with-you# ls
greenbox.yaml
root@node01:/etc/just-to-mess-with-you# rm -rf greenbox.yaml
root@node01:/etc/just-to-mess-with-you#

Exit out of node01 using CTRL + D or type exit. You should return to the controlplane node. Check if the static-greenbox pod has been deleted:

root@controlplane:~# kubectl get pods --all-namespaces -o wide  | grep static-greenbox
root@controlplane:~#

Multiple Schedulers

Creating

  • via CLI: run an additional kube-scheduler binary with its own options
  • via kubeadm: deploy the second scheduler as a pod (a sketch follows this list)
    • --leader-elect
      • used when multiple copies of the scheduler run on different master nodes
      • decides which copy leads the scheduling activity
    • --scheduler-name
      • name of the scheduler
    • --lock-object-name
      • differentiates the custom scheduler's leader-election lock object from the default one
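A sketch of a second scheduler deployed as a pod, with flags matching the v1.20-era cluster shown earlier (the image tag and names are illustrative; newer releases set the scheduler name via a KubeSchedulerConfiguration file instead):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-custom-scheduler
      namespace: kube-system
    spec:
      containers:
      - name: kube-scheduler
        image: k8s.gcr.io/kube-scheduler:v1.20.0
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf
        - --scheduler-name=my-custom-scheduler
        - --leader-elect=false
        # volume mounts for scheduler.conf omitted in this sketch

Pods then opt in to this scheduler via schedulerName, as in the definition below.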

Using

  • definition
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
      schedulerName: my-custom-scheduler
    

Status

  • kubectl get events
  • kubectl logs my-custom-scheduler -n kube-system