Welcome to the KodeKloud Hands-On lab
controlplane ~ ➜ k get pods
No resources found in default namespace.
controlplane ~ ➜ k get replicaset
No resources found in default namespace.
controlplane ~ ➜ k get deploy
No resources found in default namespace.
controlplane ~ ➜ k get deployment
No resources found in default namespace.
controlplane ~ ➜ k get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
frontend-deployment 0/4 4 0 7s
controlplane ~ ➜ k get replicaset
NAME DESIRED CURRENT READY AGE
frontend-deployment-649fb4c7c 4 4 0 24s
controlplane ~ ➜ k get pods
NAME READY STATUS RESTARTS AGE
frontend-deployment-649fb4c7c-fp7nh 0/1 ErrImagePull 0 37s
frontend-deployment-649fb4c7c-k77p9 0/1 ImagePullBackOff 0 37s
frontend-deployment-649fb4c7c-mjmnf 0/1 ErrImagePull 0 37s
frontend-deployment-649fb4c7c-tqr8w 0/1 ErrImagePull 0 37s
controlplane ~ ➜ k get pods
NAME READY STATUS RESTARTS AGE
frontend-deployment-649fb4c7c-fp7nh 0/1 ImagePullBackOff 0 118s
frontend-deployment-649fb4c7c-k77p9 0/1 ErrImagePull 0 118s
frontend-deployment-649fb4c7c-mjmnf 0/1 ErrImagePull 0 118s
frontend-deployment-649fb4c7c-tqr8w 0/1 ImagePullBackOff 0 118s
controlplane ~ ➜ k get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
frontend-deployment 0/4 4 0 2m21s
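The deployment shows 4 replicas up-to-date but 0 available, so the rollout is stuck. A quick way to confirm this (a sketch using the deployment name above) is to watch the rollout:
kubectl rollout status deployment/frontend-deployment
The command blocks until the deployment becomes available, so here it would simply keep waiting while the pods fail.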
controlplane ~ ➜ k describe deploy frontend-deployment
Name: frontend-deployment
Namespace: default
CreationTimestamp: Tue, 21 Jan 2025 08:01:30 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: name=busybox-pod
Replicas: 4 desired | 4 updated | 4 total | 0 available | 4 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: name=busybox-pod
Containers:
busybox-container:
Image: busybox888
Port: <none>
Host Port: <none>
Command:
sh
-c
echo Hello Kubernetes! && sleep 3600
Environment: <none>
Mounts: <none>
Volumes: <none>
Node-Selectors: <none>
Tolerations: <none>
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: frontend-deployment-649fb4c7c (4/4 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m9s deployment-controller Scaled up replica set frontend-deployment-649fb4c7c to 4
controlplane ~ ➜ k get pods -l busybox-pod
No resources found in default namespace.
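The selector -l busybox-pod matches pods that have a label key called busybox-pod, which none of these pods carry. The pods are labelled name=busybox-pod, so a key=value selector is needed, for example:
kubectl get pods -l name=busybox-pod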
controlplane ~ ➜ k get pods
NAME READY STATUS RESTARTS AGE
frontend-deployment-649fb4c7c-fp7nh 0/1 ImagePullBackOff 0 5m15s
frontend-deployment-649fb4c7c-k77p9 0/1 ImagePullBackOff 0 5m15s
frontend-deployment-649fb4c7c-mjmnf 0/1 ImagePullBackOff 0 5m15s
frontend-deployment-649fb4c7c-tqr8w 0/1 ImagePullBackOff 0 5m15s
controlplane ~ ➜ k describe pod frontend-deployment-649fb4c7c-fp7nh
Name: frontend-deployment-649fb4c7c-fp7nh
Namespace: default
Priority: 0
Service Account: default
Node: controlplane/192.168.187.116
Start Time: Tue, 21 Jan 2025 08:01:30 +0000
Labels: name=busybox-pod
pod-template-hash=649fb4c7c
Annotations: <none>
Status: Pending
IP: 10.42.0.12
IPs:
IP: 10.42.0.12
Controlled By: ReplicaSet/frontend-deployment-649fb4c7c
Containers:
busybox-container:
Container ID:
Image: busybox888
Image ID:
Port: <none>
Host Port: <none>
Command:
sh
-c
echo Hello Kubernetes! && sleep 3600
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hl5dr (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-hl5dr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m57s default-scheduler Successfully assigned default/frontend-deployment-649fb4c7c-fp7nh to controlplane
Normal Pulling 4m28s (x4 over 5m56s) kubelet Pulling image "busybox888"
Warning Failed 4m28s (x4 over 5m56s) kubelet Failed to pull image "busybox888": failed to pull and unpack image "docker.io/library/busybox888:latest": failed to resolve reference "docker.io/library/busybox888:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Warning Failed 4m28s (x4 over 5m56s) kubelet Error: ErrImagePull
Warning Failed 4m2s (x6 over 5m55s) kubelet Error: ImagePullBackOff
Normal BackOff 49s (x20 over 5m55s) kubelet Back-off pulling image "busybox888"
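The events show the root cause: the image busybox888 does not exist on Docker Hub, so every pull fails and the kubelet backs off. One way to repair it (a sketch, reusing the container name busybox-container from the describe output) is to point the Deployment at a valid image and watch the new rollout:
kubectl set image deployment/frontend-deployment busybox-container=busybox
kubectl rollout status deployment/frontend-deployment
The ReplicaSet controller then creates fresh pods with the corrected image, replacing the ones stuck in ImagePullBackOff.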
controlplane ~ ➜ ls
deployment-definition-1.yaml sample.yaml
controlplane ~ ➜ k get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
frontend-deployment 0/4 4 0 7m32s
controlplane ~ ➜ k describe frontend-deployment
error: the server doesn't have a resource type "frontend-deployment"
controlplane ~ ✖ k describe deploy frontend-deployment
Name: frontend-deployment
Namespace: default
CreationTimestamp: Tue, 21 Jan 2025 08:01:30 +0000
Labels: <none>
Annotations: deployment.kubernetes.io/revision: 1
Selector: name=busybox-pod
Replicas: 4 desired | 4 updated | 4 total | 0 available | 4 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: name=busybox-pod
Containers:
busybox-container:
Image: busybox888
Port: <none>
Host Port: <none>
Command:
sh
-c
echo Hello Kubernetes! && sleep 3600
Environment: <none>
Mounts: <none>
Volumes: <none>
Node-Selectors: <none>
Tolerations: <none>
Conditions:
Type Status Reason
---- ------ ------
Available False MinimumReplicasUnavailable
Progressing True ReplicaSetUpdated
OldReplicaSets: <none>
NewReplicaSet: frontend-deployment-649fb4c7c (4/4 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 7m47s deployment-controller Scaled up replica set frontend-deployment-649fb4c7c to 4
controlplane ~ ➜ vi deployment-definition-1.yaml
controlplane ~ ➜ k apply -f deployment-definition-1.yaml
Error from server (BadRequest): error when creating "deployment-definition-1.yaml": deployment in version "v1" cannot be handled as a Deployment: no kind "deployment" is registered for version "apps/v1" in scheme "k8s.io/apimachinery@v1.31.0-k3s3/pkg/runtime/scheme.go:100"
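The error comes from the manifest itself: the kind field is case-sensitive, so kind: deployment is not a registered kind and it must read kind: Deployment. A minimal corrected header (a sketch of how the fixed manifest should start) looks like:
apiVersion: apps/v1
kind: Deployment
After fixing the capitalisation in vi, the apply succeeds, as shown below.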
controlplane ~ ✖ vi deployment-definition-1.yaml
controlplane ~ ➜ k apply -f deployment-definition-1.yaml
deployment.apps/deployment-1 created
controlplane ~ ➜ vi deploy.yaml
controlplane ~ ➜ k apply -f deploy.yaml
deployment.apps/httpd-frontend created
controlplane ~ ➜
Practice Test - Node Affinity
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      tolerations:
      - key: "color"
        operator: "Equal"
        value: "blue"
        effect: "NoSchedule"
Practice Test - Resource Limits
apiVersion: v1
kind: Pod
metadata:
  name: elephant
  namespace: default
spec:
  containers:
  - name: memory-demo-ctr
    image: polinux/stress
    resources:
      requests:
        memory: "20Mi"
      limits:
        memory: "20Mi"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "150M", "--vm-hang", "1"]
Practice Test - Static PODs
First, let's identify the node in which the pod called static-greenbox is created. To do this, run:
root@controlplane:~# kubectl get pods --all-namespaces -o wide | grep static-greenbox
default static-greenbox-node01 1/1 Running 0 19s 10.244.1.2 node01 <none> <none>
root@controlplane:~#
From the result of this command, we can see that the pod is running on node01.
Next, SSH to node01 and identify the path configured for static pods in this node.
Important: The path need not be /etc/kubernetes/manifests. Make sure to check the path configured in the kubelet configuration file.
root@controlplane:~# ssh node01
root@node01:~# ps -ef | grep /usr/bin/kubelet
root 4147 1 0 14:05 ? 00:00:00 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock --pod-infra-container-image=registry.k8s.io/pause:3.9
root 4773 4733 0 14:05 pts/0 00:00:00 grep /usr/bin/kubelet
root@node01:~# grep -i staticpod /var/lib/kubelet/config.yaml
staticPodPath: /etc/just-to-mess-with-you
root@node01:~#
Here the staticPodPath is /etc/just-to-mess-with-you
Navigate to this directory and delete the YAML file:
root@node01:/etc/just-to-mess-with-you# ls
greenbox.yaml
root@node01:/etc/just-to-mess-with-you# rm -rf greenbox.yaml
root@node01:/etc/just-to-mess-with-you#
Exit out of node01 using CTRL + D or type exit. You should return to the controlplane node. Check if the static-greenbox pod has been deleted:
root@controlplane:~# kubectl get pods --all-namespaces -o wide | grep static-greenbox
root@controlplane:~#
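The same mechanism works in the other direction: dropping a manifest into the configured staticPodPath makes the kubelet create the pod automatically. A quick way to generate such a manifest (a sketch, assuming the default path /etc/kubernetes/manifests on the target node) is:
kubectl run static-busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 > /etc/kubernetes/manifests/static-busybox.yaml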
Multiple Schedulers
The default scheduler image in Kubernetes is determined by the Kubernetes version. To check which image the default scheduler uses, follow the steps below.
1. Check the default scheduler pod
To find the default scheduler's image, look at the kube-scheduler pod, which runs in the kube-system namespace by default.
kubectl get pods -n kube-system | grep kube-scheduler
Example output:
kube-scheduler-controlplane 1/1 Running 0 10m
2. Check the default scheduler image
Run the following command to check the image used by the default scheduler:
kubectl describe pod kube-scheduler-controlplane -n kube-system | grep Image
Example output:
Image: k8s.gcr.io/kube-scheduler:v1.25.0
Here, the image name is k8s.gcr.io/kube-scheduler:<version>.
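An alternative that avoids grepping (a sketch, assuming the pod name kube-scheduler-controlplane shown above) is to read the image field directly with jsonpath:
kubectl get pod kube-scheduler-controlplane -n kube-system -o jsonpath='{.spec.containers[0].image}'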