CKA Exam Free Practice Questions: Linux Foundation Certified Kubernetes Administrator (CKA) Program Certification
You must connect to the correct host.
Failure to do so may result in a zero score.
[candidate@base] $ ssh cka000060
Task
Install Argo CD in the cluster by performing the following tasks:
Add the official Argo CD Helm repository with the name argo
The Argo CD CRDs have already been pre-installed in the cluster
Generate a template of the Argo CD Helm chart version 7.7.3 for the argocd namespace and save it to ~/argo-helm.yaml. Configure the chart to not install CRDs.
Correct answer:
# Task Summary
* SSH into cka000060
* Add the Argo CD Helm repo named argo
* Generate a manifest (~/argo-helm.yaml) for Argo CD version 7.7.3
* Target namespace: argocd
* Do not install CRDs
* Just generate, don't install
# Step-by-Step Solution
## 1. SSH into the correct host
ssh cka000060
# Required - skipping this = zero score
## 2. Add the Argo CD Helm repository
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
# This adds the official Argo Helm chart source.
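Optionally, confirm that chart version 7.7.3 is actually published in the repo before templating (helm search repo with --versions lists all available chart versions):
helm search repo argo/argo-cd --versions | grep 7.7.3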
## 3. Generate the Argo CD Helm chart template (version 7.7.3)
Use the helm template command to generate a manifest and write it to ~/argo-helm.yaml.
helm template argocd argo/argo-cd \
--version 7.7.3 \
--namespace argocd \
--set crds.install=false \
> ~/argo-helm.yaml
* argocd # Release name (can be anything; here it matches the namespace)
* --set crds.install=false # Disables CRD installation
* > ~/argo-helm.yaml # Save to required file
## 4. Verify the generated file (optional but smart)
head ~/argo-helm.yaml
Check that it contains valid Kubernetes YAML and does not include CRDs.
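A quick way to confirm that no CRDs slipped into the output (the command should print nothing):
grep "kind: CustomResourceDefinition" ~/argo-helm.yaml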
# Final Command Summary
ssh cka000060
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm template argocd argo/argo-cd \
--version 7.7.3 \
--namespace argocd \
--set crds.install=false \
> ~/argo-helm.yaml
head ~/argo-helm.yaml # Optional verification
Ensure a single instance of the pod nginx is running on each node of the Kubernetes cluster, where nginx also represents the image name that has to be used. Do not override any taints currently in place.
Use DaemonSet to complete this task and use ds-kusc00201 as DaemonSet name.
Correct answer:
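A minimal DaemonSet manifest that satisfies the task. The app: nginx label is an arbitrary choice, and the default namespace is assumed; because the task forbids overriding taints, no tolerations are added, so pods are scheduled only onto untainted nodes.
# vi ds.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-kusc00201
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
#
kubectl create -f ds.yaml
kubectl get ds ds-kusc00201 -o wide   # expect one pod per untainted node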
Get a list of all the pods, showing name and namespace, with a JSONPath expression.
Correct answer:
kubectl get pods -o=jsonpath="{.items[*]['metadata.name', 'metadata.namespace']}"
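If one pod per line is easier to read, a JSONPath range expression (a variant, not required by the task) prints name and namespace in columns:
kubectl get pods --all-namespaces -o jsonpath="{range .items[*]}{.metadata.name}{'\t'}{.metadata.namespace}{'\n'}{end}"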
Score: 4%

Task
Scale the deployment presentation to 6 pods.

Correct answer:
Solution:
kubectl get deployment
kubectl scale deployment.apps/presentation --replicas=6
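To confirm the scale-out took effect, the deployment should report six ready replicas:
kubectl get deployment presentation   # expect READY 6/6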
Score: 7%

Context
An existing Pod needs to be integrated into the Kubernetes built-in logging architecture (e.g. kubectl logs).
Adding a streaming sidecar container is a good and common way to accomplish this requirement.
Task
Add a sidecar container named sidecar, using the busybox image, to the existing Pod big-corp-app. The new sidecar container has to run the following command:
/bin/sh -c tail -n+1 -f /var/log/big-corp-app.log
Use a Volume, mounted at /var/log, to make the log file big-corp-app.log available to the sidecar container.


Correct answer:
Solution:
# Inspect the current Pod spec:
kubectl get pod big-corp-app -o yaml
# Edit the spec so it includes the sidecar container and the shared log volume:
apiVersion: v1
kind: Pod
metadata:
  name: big-corp-app
spec:
  containers:
  - name: big-corp-app
    image: busybox
    args:
    - /bin/sh
    - -c
    - >
      i=0;
      while true;
      do
        echo "$(date) INFO $i" >> /var/log/big-corp-app.log;
        i=$((i+1));
        sleep 1;
      done
    volumeMounts:
    - name: logs
      mountPath: /var/log
  - name: sidecar
    image: busybox
    args: [/bin/sh, -c, 'tail -n+1 -f /var/log/big-corp-app.log']
    volumeMounts:
    - name: logs
      mountPath: /var/log
  volumes:
  - name: logs
    emptyDir: {}
# Verify that the sidecar streams the log:
kubectl logs big-corp-app -c sidecar
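Note that containers cannot be added to a running Pod in place, so the usual approach is to export the spec, edit it as shown above, then delete and recreate the Pod:
kubectl get pod big-corp-app -o yaml > big-corp-app.yaml
# edit big-corp-app.yaml to add the sidecar container and the logs volume
kubectl delete pod big-corp-app
kubectl apply -f big-corp-app.yaml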
Score: 4%

Task
Create a persistent volume with name app-data, of capacity 1Gi and access mode ReadOnlyMany. The type of volume is hostPath and its location is /srv/app-data.

Correct answer:
Solution:
# vi pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadOnlyMany
  hostPath:
    path: /srv/app-data
# Create the volume:
kubectl create -f pv.yaml
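A quick check that the volume was created with the requested capacity and access mode:
kubectl get pv app-data   # expect CAPACITY 1Gi and ACCESS MODES ROX (ReadOnlyMany)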
Create an nginx pod and list the pod with different levels of verbosity. See the solution below.
Correct answer:
# Create a pod
kubectl run nginx --image=nginx --restart=Never --port=80
# List the pod with increasing verbosity
kubectl get po nginx --v=7   # also shows HTTP request headers
kubectl get po nginx --v=8   # also shows HTTP request contents
kubectl get po nginx --v=9   # shows HTTP request contents without truncation
A Kubernetes worker node, named wk8s-node-0, is in state NotReady. Investigate why this is the case, and perform any appropriate steps to bring the node to a Ready state, ensuring that any changes are made permanent.
You can ssh to the failed node using:
[student@node-1] $ ssh wk8s-node-0
You can assume elevated privileges on the node with the following command:
[student@wk8s-node-0] $ sudo -i
Correct answer:
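In this scenario the root cause is typically that the kubelet service on the worker node is stopped and disabled. A sketch of the usual fix, assuming the kubelet is indeed the culprit (confirm with systemctl status first):
ssh wk8s-node-0
sudo -i
systemctl status kubelet           # typically reports inactive (dead)
systemctl enable --now kubelet     # start kubelet and enable it across reboots, making the change permanent
exit
exit
kubectl get nodes                  # wk8s-node-0 should report Ready shortly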