CKA Exam Free Practice Questions: Linux Foundation Certified Kubernetes Administrator (CKA) Program Certification
For this item, you will have to ssh to the nodes ik8s-master-0 and ik8s-node-0 and complete all tasks on these nodes. Ensure that you return to the base node (hostname: node-1) when you have completed this item.
Context
As an administrator of a small development team, you have been asked to set up a Kubernetes cluster to test the viability of a new application.
Task
You must use kubeadm to perform this task. Any kubeadm invocations will require the use of the --ignore-preflight-errors=all option.
* Configure the node ik8s-master-0 as a master node.
* Join the node ik8s-node-0 to the cluster.
Correct answer:
You must use the kubeadm configuration file located at /etc/kubeadm.conf when initializing your cluster.
You may use any CNI plugin to complete this task, but if you don't have your favourite CNI plugin's manifest URL at hand, Calico is one popular option: https://docs.projectcalico.org/v3.14/manifests/calico.yaml
Docker is already installed on both nodes and apt has been configured so that you can install the required tools.
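The original answer gives only these hints; a possible command sequence is sketched below. The kubectl setup steps follow the output that kubeadm init normally prints, and the <master-ip>, <token> and <hash> values are placeholders, not values from the exam environment.
# On ik8s-master-0: initialise the control plane using the supplied config file
sudo kubeadm init --config /etc/kubeadm.conf --ignore-preflight-errors=all

# Configure kubectl for the current user (as instructed by the kubeadm init output)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a CNI plugin, for example Calico
kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

# Print a join command to run on the worker node
kubeadm token create --print-join-command

# On ik8s-node-0: run the printed join command, adding the required flag
sudo kubeadm join <master-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> --ignore-preflight-errors=all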
Create a snapshot of the etcd instance running at https://127.0.0.1:2379, saving the snapshot to the file path
/srv/data/etcd-snapshot.db.
The following TLS certificates/key are supplied for connecting to the server with etcdctl:
* CA certificate: /opt/KUCM00302/ca.crt
* Client certificate: /opt/KUCM00302/etcd-client.crt
* Client key: /opt/KUCM00302/etcd-client.key
Correct answer:
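The original answer is blank; a sketch using the supplied certificates, assuming etcdctl v3 is installed on the node:
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/opt/KUCM00302/ca.crt \
    --cert=/opt/KUCM00302/etcd-client.crt \
    --key=/opt/KUCM00302/etcd-client.key \
    snapshot save /srv/data/etcd-snapshot.db

# Optionally verify the snapshot that was written
ETCDCTL_API=3 etcdctl --write-out=table snapshot status /srv/data/etcd-snapshot.db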

List the logs of the pod named "frontend", search for the pattern "started", and write the matching lines to the file "/opt/error-logs". See the solution below.
Correct answer:
kubectl logs frontend | grep -i "started" > /opt/error-logs
You must connect to the correct host.
Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000047
Task
A MariaDB Deployment in the mariadb namespace has been deleted by mistake. Your task is to restore the Deployment ensuring data persistence. Follow these steps:
Create a PersistentVolumeClaim (PVC) named mariadb in the mariadb namespace with the following specifications:
Access mode ReadWriteOnce
Storage 250Mi
You must use the existing retained PersistentVolume (PV).
Failure to do so will result in a reduced score.
There is only one existing PersistentVolume.
Edit the MariaDB Deployment file located at ~/mariadb-deployment.yaml to use the PVC you created in the previous step.
Apply the updated Deployment file to the cluster.
Ensure the MariaDB Deployment is running and stable.
Correct answer:
Task Overview
You're restoring a MariaDB deployment in the mariadb namespace with persistent data.
# Tasks:
* SSH into cka000047
* Create a PVC named mariadb:
* Namespace: mariadb
* Access mode: ReadWriteOnce
* Storage: 250Mi
* Use the existing retained PV (there's only one)
* Edit ~/mariadb-deployment.yaml to use the PVC
* Apply the deployment
* Verify MariaDB is running and stable
Step-by-Step Solution
Step 1: SSH into the correct host
ssh cka000047
# Required: skipping this step may result in a zero score
Step 2: Inspect the existing PersistentVolume
kubectl get pv
# Identify the only existing PV, e.g.:
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM    STORAGECLASS
mariadb-pv   250Mi      RWO            Retain           Available   <none>   manual
Ensure the status is Available, and it is not already bound to a claim.
Step 3: Create the PVC to bind the retained PV
Create a file mariadb-pvc.yaml:
cat <<EOF > mariadb-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb
  namespace: mariadb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi
  volumeName: mariadb-pv   # Match the PV name exactly
EOF
Apply the PVC:
kubectl apply -f mariadb-pvc.yaml
# This binds the PVC to the retained PV.
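To confirm the claim bound to the retained volume (the output below shows the expected Bound state, not output captured from the exam environment):
kubectl get pvc -n mariadb
# NAME      STATUS   VOLUME       CAPACITY   ACCESS MODES
# mariadb   Bound    mariadb-pv   250Mi      RWO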
Step 4: Edit the MariaDB Deployment YAML
Open the file:
nano ~/mariadb-deployment.yaml
Look under the spec.template.spec.containers.volumeMounts and spec.template.spec.volumes sections and update them like so:
Add this under the container:
volumeMounts:
  - name: mariadb-storage
    mountPath: /var/lib/mysql
And under the pod spec:
volumes:
  - name: mariadb-storage
    persistentVolumeClaim:
      claimName: mariadb
# These lines mount the PVC at the MariaDB data directory.
Step 5: Apply the updated Deployment
kubectl apply -f ~/mariadb-deployment.yaml
Step 6: Verify the Deployment is running and stable
kubectl get pods -n mariadb
kubectl describe pod -n mariadb <mariadb-pod-name>
# Ensure the pod is in Running state and volume is mounted.
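A rollout check can also confirm stability; the Deployment name mariadb used here is an assumption and must match the name in ~/mariadb-deployment.yaml:
kubectl rollout status deployment/mariadb -n mariadb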
# Final Command Summary
ssh cka000047
kubectl get pv # Find the retained PV
# Create PVC
cat <<EOF > mariadb-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb
  namespace: mariadb
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 250Mi
  volumeName: mariadb-pv
EOF
kubectl apply -f mariadb-pvc.yaml
# Edit Deployment
nano ~/mariadb-deployment.yaml
# Add volumeMount and volume to use the PVC as described
kubectl apply -f ~/mariadb-deployment.yaml
kubectl get pods -n mariadb
Score: 4%

Task
Create a pod named kucc8 with a single app container for each of the following images running inside it (there may be between 1 and 4 images specified): nginx + redis + memcached.

Correct answer:
Solution:
kubectl run kucc8 --image=nginx --dry-run=client -o yaml > kucc8.yaml
# vi kucc8.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kucc8
spec:
  containers:
  - image: nginx
    name: nginx
  - image: redis
    name: redis
  - image: memcached
    name: memcached
kubectl create -f kucc8.yaml
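To confirm the pod and all of its containers started (the 3/3 ready count assumes the three images listed in the task):
kubectl get pod kucc8
# NAME    READY   STATUS    RESTARTS   AGE
# kucc8   3/3     Running   0          30s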
Score: 7%

Task
First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379, saving the snapshot to
/srv/data/etcd-snapshot.db.

Next, restore an existing, previous snapshot located at /var/lib/backup/etcd-snapshot-previous.db


Correct answer:
Solution:
#backup
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" \
    --cacert=/opt/KUIN000601/ca.crt \
    --cert=/opt/KUIN000601/etcd-client.crt \
    --key=/opt/KUIN000601/etcd-client.key \
    snapshot save /srv/data/etcd-snapshot.db
#restore
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" \
    --cacert=/opt/KUIN000601/ca.crt \
    --cert=/opt/KUIN000601/etcd-client.crt \
    --key=/opt/KUIN000601/etcd-client.key \
    snapshot restore /var/lib/backup/etcd-snapshot-previous.db
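Note that on a live cluster the restore is usually written to a fresh data directory which the etcd static Pod manifest is then pointed at; the directory below is an assumption, not part of the original answer:
ETCDCTL_API=3 etcdctl snapshot restore /var/lib/backup/etcd-snapshot-previous.db \
    --data-dir /var/lib/etcd-from-backup
# Then edit /etc/kubernetes/manifests/etcd.yaml so the etcd data hostPath
# points at /var/lib/etcd-from-backup, and wait for the etcd pod to restart.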
From the pods with the label name=cpu-utilizer, find the pod running the highest CPU workload and write its name to the file /opt/KUTR00102/KUTR00102.txt (which already exists).
Correct answer:
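The original answer is blank; a sketch, assuming the metrics server is available and that <pod-name> stands for whichever pod tops the sorted list:
kubectl top pod -l name=cpu-utilizer --sort-by=cpu
# Write the name of the pod with the highest CPU usage to the target file
echo "<pod-name>" > /opt/KUTR00102/KUTR00102.txt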


Perform the following tasks:
* Add an init container to hungry-bear (which has been defined in spec file /opt/KUCC00108/pod-spec-KUCC00108.yaml)
* The init container should create an empty file named /workdir/calm.txt
* If /workdir/calm.txt is not detected, the pod should exit
* Once the spec file has been updated with the init container definition, the pod should be created.
Correct answer:
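The original answer is blank; a sketch of the additions to /opt/KUCC00108/pod-spec-KUCC00108.yaml is shown below. The busybox image, the workdir volume name and the emptyDir volume are assumptions; the main container and its file-check command are normally already present in the supplied spec file and only need the extra volumeMount.
spec:
  initContainers:
  - name: init-workdir
    image: busybox
    command: ['sh', '-c', 'touch /workdir/calm.txt']
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  containers:
  - name: hungry-bear          # existing container: add the same volumeMount
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  volumes:
  - name: workdir
    emptyDir: {}
Then create the pod:
kubectl create -f /opt/KUCC00108/pod-spec-KUCC00108.yaml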


