Kubernetes Configuration Audit

Minimum files to request

Master and worker nodes

  • Output of these commands:
    find /etc -ls
    ps -ef
    iptables -L

  • Every file in /etc/kubernetes (see the collection sketch below)
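
As a minimal collection sketch (output file names are illustrative; run as root on each node), the requested elements can be gathered as follows:

find /etc -ls > etc-listing.txt
ps -ef > processes.txt
iptables -L > iptables.txt
# Archive the whole /etc/kubernetes tree for offline review
tar czf etc-kubernetes.tar.gz /etc/kubernetes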

API

Retrieve the following information in YAML format (the example kubectl commands after this list can help with most of it):

  • Namespace
  • ClusterRole
  • ClusterRoleBindings
  • Role (for every namespace)
  • RoleBindings (for every namespace)
  • Group
  • Pod (for every namespace)
  • ServiceAccount (for every namespace)
  • PodSecurityPolicy
  • Network Policies depending on the CNI (Container Network Interface) used
  • Depending on the Kubernetes authentication mode: the related configuration, if it is managed by a Kubernetes component.
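
A minimal extraction sketch (output file names are illustrative; group membership and the authentication configuration depend on the provider and cannot be retrieved with a single kubectl command; PodSecurityPolicy only exists on clusters prior to Kubernetes 1.25):

kubectl get namespaces -o yaml > namespaces.yaml
kubectl get clusterroles -o yaml > clusterroles.yaml
kubectl get clusterrolebindings -o yaml > clusterrolebindings.yaml
kubectl get roles --all-namespaces -o yaml > roles.yaml
kubectl get rolebindings --all-namespaces -o yaml > rolebindings.yaml
kubectl get pods --all-namespaces -o yaml > pods.yaml
kubectl get serviceaccounts --all-namespaces -o yaml > serviceaccounts.yaml
kubectl get podsecuritypolicies -o yaml > podsecuritypolicies.yaml
kubectl get networkpolicies --all-namespaces -o yaml > networkpolicies.yaml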

Command summary

These commands are provided to help with the extraction; the list is not exhaustive.

Moreover, some parameters and absolute paths may differ depending on the environment.

stat -c %a /etc/kubernetes/manifests/kube-apiserver.yaml
stat -c %U:%G /etc/kubernetes/manifests/kube-apiserver.yaml
stat -c %a /etc/kubernetes/manifests/kube-controller-manager.yaml
stat -c %U:%G /etc/kubernetes/manifests/kube-controller-manager.yaml
stat -c %a /etc/kubernetes/manifests/kube-scheduler.yaml
stat -c %U:%G /etc/kubernetes/manifests/kube-scheduler.yaml
stat -c %a /etc/kubernetes/manifests/etcd.yaml
stat -c %U:%G /etc/kubernetes/manifests/etcd.yaml
stat -c %a <path/to/cni/files>
stat -c %U:%G <path/to/cni/files>
stat -c %a /var/lib/etcd
stat -c %U:%G /var/lib/etcd
stat -c %a /etc/kubernetes/admin.conf
stat -c %U:%G /etc/kubernetes/admin.conf
stat -c %a /etc/kubernetes/scheduler.conf
stat -c %U:%G /etc/kubernetes/scheduler.conf
stat -c %a /etc/kubernetes/controller-manager.conf
stat -c %U:%G /etc/kubernetes/controller-manager.conf
ls -laR /etc/kubernetes/pki/
ls -laR /etc/kubernetes/pki/*.crt
ls -laR /etc/kubernetes/pki/*.key
ps -ef | grep kube-apiserver
ps -ef | grep kube-controller-manager
ps -ef | grep kube-scheduler
ps -ef | grep etcd
ps -ef | grep apiserver
stat -c %a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
stat -c %a <path><filename>
stat -c %U:%G <path><filename>
stat -c %a /etc/kubernetes/kubelet.conf
stat -c %U:%G /etc/kubernetes/kubelet.conf
stat -c %a <filename>
stat -c %U:%G <filename>
stat -c %a /var/lib/kubelet/config.yaml
ps -ef | grep kubelet
kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name
kubectl get roles --all-namespaces -o yaml
kubectl get clusterroles -o yaml
kubectl get psp
kubectl get psp <name> -o=jsonpath='{.spec.privileged}'
kubectl get psp <name> -o=jsonpath='{.spec.hostPID}'
kubectl get psp <name> -o=jsonpath='{.spec.hostIPC}'
kubectl get psp <name> -o=jsonpath='{.spec.hostNetwork}'
kubectl get psp <name> -o=jsonpath='{.spec.allowPrivilegeEscalation}'
kubectl get psp <name> -o=jsonpath='{.spec.runAsUser.rule}'
kubectl get psp <name> -o=jsonpath='{.spec.requiredDropCapabilities}'
kubectl get networkpolicy --all-namespaces
kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A
kubectl get namespaces
kubectl get all

Test descriptions

These tests are provided to help analyze the extracted data. Tests and verifications must be adapted to the environment and the context.

These tests are extracted from the Center for Internet Security (CIS) Kubernetes Benchmark.

Moreover, some parameters and absolute paths may differ depending on the environment.

Test 0

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %a /etc/kubernetes/manifests/kube-apiserver.yaml
  • verif: Verify that the permissions are 644 or more restrictive.

Test 1

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %U:%G /etc/kubernetes/manifests/kube-apiserver.yaml
  • verif: Verify that the ownership is set to root:root.

Test 2

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %a /etc/kubernetes/manifests/kube-controller-manager.yaml
  • verif: Verify that the permissions are 644 or more restrictive.

Test 3

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %U:%G /etc/kubernetes/manifests/kube-controller-manager.yaml
  • verif: Verify that the ownership is set to root:root.

Test 4

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %a /etc/kubernetes/manifests/kube-scheduler.yaml
  • verif: Verify that the permissions are 644 or more restrictive.

Test 5

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %U:%G /etc/kubernetes/manifests/kube-scheduler.yaml
  • verif: Verify that the ownership is set to root:root.

Test 6

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %a /etc/kubernetes/manifests/etcd.yaml
  • verif: Verify that the permissions are 644 or more restrictive.

Test 7

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %U:%G /etc/kubernetes/manifests/etcd.yaml
  • verif: Verify that the ownership is set to root:root.

Test 8

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %a <path/to/cni/files>
  • verif: Verify that the permissions are 644 or more restrictive.

Test 9

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %U:%G <path/to/cni/files>
  • verif: Verify that the ownership is set to root:root.

Test 10

  • test: On the etcd server node, get the etcd data directory, passed as an argument --data-dir, from the below command: ps -ef | grep etcd. Run the below command (based on the etcd data directory found above).
  • command: stat -c %a /var/lib/etcd
  • verif: Verify that the permissions are 700 or more restrictive.

Test 11

  • test: On the etcd server node, get the etcd data directory, passed as an argument --data-dir, from the below command: ps -ef | grep etcd. Run the below command (based on the etcd data directory found above).
  • command: stat -c %U:%G /var/lib/etcd
  • verif: Verify that the ownership is set to etcd:etcd.

Test 12

  • test: Run the following command (based on the file location on your system) on the master node.
  • command: stat -c %a /etc/kubernetes/admin.conf
  • verif: Verify that the permissions are 644 or more restrictive.

Test 13

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %U:%G /etc/kubernetes/admin.conf
  • verif: Verify that the ownership is set to root:root.

Test 14

  • test: Run the following command (based on the file location on your system) on the master node.
  • command: stat -c %a /etc/kubernetes/scheduler.conf
  • verif: Verify that the permissions are 644 or more restrictive.

Test 15

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %U:%G /etc/kubernetes/scheduler.conf
  • verif: Verify that the ownership is set to root:root.

Test 16

  • test: Run the following command (based on the file location on your system) on the master node.
  • command: stat -c %a /etc/kubernetes/controller-manager.conf
  • verif: Verify that the permissions are 644 or more restrictive.

Test 17

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: stat -c %U:%G /etc/kubernetes/controller-manager.conf
  • verif: Verify that the ownership is set to root:root.

Test 18

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: ls -laR /etc/kubernetes/pki/
  • verif: Verify that the ownership of all files and directories in this hierarchy is set to root:root.

Test 19

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: ls -laR /etc/kubernetes/pki/*.crt
  • verif: Verify that the permissions are 644 or more restrictive.

Test 20

  • test: Run the below command (based on the file location on your system) on the master node.
  • command: ls -laR /etc/kubernetes/pki/*.key
  • verif: Verify that the permissions are 600.

Test 21

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --anonymous-auth argument is set to false.

Test 22

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --token-auth-file argument does not exist.

Test 23

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --kubelet-https argument either does not exist or is set to true.

Test 24

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --kubelet-client-certificate and --kubelet-client-key arguments exist and they are set as appropriate.

Test 25

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --kubelet-certificate-authority argument exists and is set as appropriate.

Test 26

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --authorization-mode argument exists and is not set to AlwaysAllow.

Test 27

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --authorization-mode argument exists and is set to a value that includes Node.

Test 28

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --authorization-mode argument exists and is set to a value that includes RBAC.

Test 29

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --enable-admission-plugins argument is set to a value that includes EventRateLimit.

Test 30

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that if the --enable-admission-plugins argument is set, its value does not include AlwaysAdmit.

Test 31

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --enable-admission-plugins argument is set to a value that includes AlwaysPullImages.

Test 32

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --enable-admission-plugins argument is set to a value that includes SecurityContextDeny, if PodSecurityPolicy is not included.

Test 33

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --disable-admission-plugins argument is set to a value that does not include ServiceAccount.

Test 34

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --disable-admission-plugins argument is set to a value that does not include NamespaceLifecycle.

Test 35

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --enable-admission-plugins argument is set to a value that includes PodSecurityPolicy.

Test 36

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --enable-admission-plugins argument is set to a value that includes NodeRestriction.

Test 37

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --insecure-bind-address argument does not exist.

Test 38

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --insecure-port argument is set to 0.

Test 39

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --secure-port argument is either not set or is set to an integer value between 1 and 65535.

Test 40

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --profiling argument is set to false.

Test 41

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --audit-log-path argument is set as appropriate.

Test 42

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --audit-log-maxage argument is set to 30 or as appropriate.

Test 43

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --audit-log-maxbackup argument is set to 10 or as appropriate.

Test 44

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --audit-log-maxsize argument is set to 100 or as appropriate.

Test 45

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --request-timeout argument is either not set or set to an appropriate value.

Test 46

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that if the --service-account-lookup argument exists it is set to true.

Test 47

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --service-account-key-file argument exists and is set as appropriate.

Test 48

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --etcd-certfile and --etcd-keyfile arguments exist and they are set as appropriate.

Test 49

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --tls-cert-file and --tls-private-key-file arguments exist and they are set as appropriate.

Test 50

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --client-ca-file argument exists and it is set as appropriate.

Test 51

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --etcd-cafile argument exists and it is set as appropriate.

Test 52

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --encryption-provider-config argument is set to an EncryptionConfig file. Additionally, ensure that the EncryptionConfig file has all the desired resources covered, especially any secrets.

Test 53

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Get the EncryptionConfig file set for --encryption-provider-config argument. Verify that aescbc, kms or secretbox is set as the encryption provider for all the desired resources.
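
As a sketch for the verification above (the path is the one passed to --encryption-provider-config and is a placeholder here), the configured providers can be listed with:

grep -E 'aescbc|kms|secretbox|identity' <path/to/EncryptionConfig/file>

Any resource whose first listed provider is identity is stored unencrypted.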

Test 54

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --tls-cipher-suites argument is set as outlined in the CIS benchmark remediation procedure.

Test 55

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-controller-manager
  • verif: Verify that the --terminated-pod-gc-threshold argument is set as appropriate.

Test 56

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-controller-manager
  • verif: Verify that the --profiling argument is set to false.

Test 57

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-controller-manager
  • verif: Verify that the --use-service-account-credentials argument is set to true.

Test 58

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-controller-manager
  • verif: Verify that the --service-account-private-key-file argument is set as appropriate.

Test 59

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-controller-manager
  • verif: Verify that the --root-ca-file argument exists and is set to a certificate bundle file containing the root certificate for the API server's serving certificate.

Test 60

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-controller-manager
  • verif: Verify that RotateKubeletServerCertificate argument exists and is set to true.

Test 61

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-controller-manager
  • verif: Verify that the --bind-address argument is set to 127.0.0.1

Test 62

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-scheduler
  • verif: Verify that the --profiling argument is set to false.

Test 63

  • test: Run the following command on the master node:
  • command: ps -ef | grep kube-scheduler
  • verif: Verify that the --bind-address argument is set to 127.0.0.1

Test 64

  • test: Run the following command on the etcd server node
  • command: ps -ef | grep etcd
  • verif: Verify that the --cert-file and the --key-file arguments are set as appropriate.

Test 65

  • test: Run the following command on the etcd server node:
  • command: ps -ef | grep etcd
  • verif: Verify that the --client-cert-auth argument is set to true.

Test 66

  • test: Run the following command on the etcd server node:
  • command: ps -ef | grep etcd
  • verif: Verify that if the --auto-tls argument exists, it is not set to true.

Test 67

  • test: Run the following command on the etcd server node:
  • command: ps -ef | grep etcd
  • verif: Verify that the --peer-cert-file and --peer-key-file arguments are set as appropriate. Note: This recommendation is applicable only for etcd clusters. If you are using only one etcd server in your environment then this recommendation is not applicable.

Test 68

  • test: Run the following command on the etcd server node:
  • command: ps -ef | grep etcd
  • verif: Verify that the --peer-client-cert-auth argument is set to true. Note: This recommendation is applicable only for etcd clusters. If you are using only one etcd server in your environment then this recommendation is not applicable.

Test 69

  • test: Run the following command on the etcd server node:
  • command: ps -ef | grep etcd
  • verif: Verify that if the --peer-auto-tls argument exists, it is not set to true. Note: This recommendation is applicable only for etcd clusters. If you are using only one etcd server in your environment then this recommendation is not applicable.

Test 70

  • test: Review the CA used by the etcd environment and ensure that it does not match the CA certificate file used for the management of the overall Kubernetes cluster. Run the following command on the master node: ps -ef | grep etcd. Note the file referenced by the --trusted-ca-file argument. Run the following command on the master node:
  • command: ps -ef | grep apiserver
  • verif: Verify that the file referenced by the --client-ca-file for apiserver is different from the --trusted-ca-file used by etcd.

Test 71

  • test: Review user access to the cluster and ensure that users are not making use of Kubernetes client certificate authentication.
  • command: # NO COMMAND
  • verif:

Test 72

  • test: Run the following command on one of the cluster master nodes:
  • command: ps -ef | grep kube-apiserver
  • verif: Verify that the --audit-policy-file is set. Review the contents of the file specified and ensure that it contains a valid audit policy.

Test 73

  • test: Review the audit policy provided for the cluster and ensure that it covers at least the following areas: access to Secrets managed by the cluster; modification of pod and deployment objects; use of pods/exec, pods/portforward, pods/proxy and services/proxy. For most requests, minimally logging at the Metadata level is recommended (the most basic level of logging).
  • command: # NO COMMAND
  • verif: Care should be taken to only log Metadata for requests to Secrets, ConfigMaps, and TokenReviews, in order to avoid the risk of logging sensitive data.

Test 74

  • test: Run the below command (based on the file location on your system) on each worker node.
  • command: stat -c %a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  • verif: Verify that the permissions are 644 or more restrictive.

Test 75

  • test: Run the below command (based on the file location on your system) on each worker node.
  • command: stat -c %U:%G /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  • verif: Verify that the ownership is set to root:root.

Test 76

  • test: Find the kubeconfig file being used by kube-proxy by running the following command: ps -ef | grep kube-proxy. If kube-proxy is running, get the kubeconfig file location from the --kubeconfig parameter. Run the below command (based on the file location on your system) on each worker node.
  • command: stat -c %a <path><filename>
  • verif: Verify that a file is specified, that it exists, and that its permissions are 644 or more restrictive.

Test 77

  • test: Find the kubeconfig file being used by kube-proxy by running the following command: ps -ef | grep kube-proxy. If kube-proxy is running, get the kubeconfig file location from the --kubeconfig parameter. Run the below command (based on the file location on your system) on each worker node.
  • command: stat -c %U:%G <path><filename>
  • verif: Verify that the ownership is set to root:root.

Test 78

  • test: Run the below command (based on the file location on your system) on each worker node.
  • command: stat -c %a /etc/kubernetes/kubelet.conf
  • verif: Verify that the permissions are 644 or more restrictive.

Test 79

  • test: Run the below command (based on the file location on your system) on each worker node.
  • command: stat -c %U:%G /etc/kubernetes/kubelet.conf
  • verif: Verify that the ownership is set to root:root.

Test 80

  • test: Run the following command: ps -ef | grep kubelet. Find the file specified by the --client-ca-file argument. Run the following command:
  • command: stat -c %a <filename>
  • verif: Verify that the permissions are 644 or more restrictive.

Test 81

  • test: Run the following command: ps -ef | grep kubelet. Find the file specified by the --client-ca-file argument. Run the following command:
  • command: stat -c %U:%G <filename>
  • verif: Verify that the ownership is set to root:root.

Test 82

  • test: Run the below command (based on the file location on your system) on each worker node.
  • command: stat -c %a /var/lib/kubelet/config.yaml
  • verif: Verify that the permissions are 644 or more restrictive.

Test 83

  • test: Run the below command (based on the file location on your system) on each worker node.
  • command: stat -c %U:%G /var/lib/kubelet/config.yaml
  • verif: Verify that the ownership is set to root:root.

Test 84

  • test: If using a Kubelet configuration file, check that there is an entry for authentication: 'anonymous: enabled' set to false. Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that the --anonymous-auth argument is set to false. This executable argument may be omitted, provided there is a corresponding entry set to false in the Kubelet config file.

Test 85

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: If the --authorization-mode argument is present check that it is not set to AlwaysAllow. If it is not present check that there is a Kubelet config file specified by --config, and that file sets authorization: mode to something other than AlwaysAllow. It is also possible to review the running configuration of a Kubelet via the /configz endpoint on the Kubelet API port (typically 10250/TCP). Accessing these with appropriate credentials will provide details of the Kubelet's configuration.
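
The /configz endpoint mentioned above can be queried, for example, through the API server proxy (the node name is a placeholder; appropriate credentials are required):

kubectl proxy --port=8001 &
curl -s http://localhost:8001/api/v1/nodes/<node-name>/proxy/configz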

Test 86

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that the --client-ca-file argument exists and is set to the location of the client certificate authority file. If the --client-ca-file argument is not present, check that there is a Kubelet config file specified by --config, and that the file sets authentication: x509: clientCAFile to the location of the client certificate authority file.

Test 87

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that the --read-only-port argument exists and is set to 0. If the --read-only-port argument is not present, check that there is a Kubelet config file specified by --config. Check that if there is a readOnlyPort entry in the file, it is set to 0.

Test 88

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that the --streaming-connection-idle-timeout argument is not set to 0. If the argument is not present, and there is a Kubelet config file specified by --config, check that it does not set streamingConnectionIdleTimeout to 0.

Test 89

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that the --protect-kernel-defaults argument is set to true. If the --protect-kernel-defaults argument is not present, check that there is a Kubelet config file specified by --config, and that the file sets protectKernelDefaults to true.

Test 90

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that if the --make-iptables-util-chains argument exists then it is set to true. If the --make-iptables-util-chains argument does not exist, and there is a Kubelet config file specified by --config, verify that the file does not set makeIPTablesUtilChains to false.

Test 91

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that the --hostname-override argument does not exist. Note: this setting is not configurable via the Kubelet config file.

Test 92

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Review the value set for the --event-qps argument and determine whether this has been set to an appropriate level for the cluster. The value of 0 can be used to ensure that all events are captured. If the --event-qps argument does not exist, check that there is a Kubelet config file specified by --config and review the value in this location.

Test 93

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that the --tls-cert-file and --tls-private-key-file arguments exist and they are set as appropriate. If these arguments are not present, check that there is a Kubelet config specified by --config and that it contains appropriate settings for tlsCertFile and tlsPrivateKeyFile.

Test 94

  • test: Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that the --rotate-certificates argument is not present, or is set to true. If the --rotate-certificates argument is not present, verify that if there is a Kubelet config file specified by --config, that file does not contain rotateCertificates: false.

Test 95

  • test: Ignore this check if serverTLSBootstrap is true in the kubelet config file or if the --rotate-server-certificates parameter is set on kubelet. Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: Verify that RotateKubeletServerCertificate argument exists and is set to true.

Test 96

  • test: The set of cryptographic ciphers currently considered secure is the following: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_256_GCM_SHA384, TLS_RSA_WITH_AES_128_GCM_SHA256. Run the following command on each node:
  • command: ps -ef | grep kubelet
  • verif: If the --tls-cipher-suites argument is present, ensure it only contains values included in this set. If it is not present, check that there is a Kubelet config file specified by --config, and that file sets tlsCipherSuites to only include values from this set.

Test 97

  • test: Obtain a list of the principals who have access to the cluster-admin role by reviewing the clusterrolebinding output for each role binding that has access to the cluster-admin role.
  • command: kubectl get clusterrolebindings -o=custom-columns=NAME:.metadata.name,ROLE:.roleRef.name,SUBJECT:.subjects[*].name
  • verif: Review each principal listed and ensure that cluster-admin privilege is required for it.

Test 98

  • test: Review the users who have get, list or watch access to secrets objects in the Kubernetes API.
  • command: # NO COMMAND
  • verif:
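
There is no single command for this review; as a hedged sketch, access to secrets can be spot-checked per principal with kubectl auth can-i (user and namespace are placeholders):

kubectl auth can-i get secrets --as=<user> --namespace=<namespace>
kubectl auth can-i list secrets --as=<user> --namespace=<namespace>
kubectl auth can-i watch secrets --as=<user> --namespace=<namespace>

The same approach applies to Test 101 (create access to pod objects), e.g. kubectl auth can-i create pods --as=<user> --namespace=<namespace>.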

Test 99

  • test: Retrieve the roles defined in each namespace of the cluster and review them for wildcards
  • command: kubectl get roles --all-namespaces -o yaml
  • verif: Verify that wildcards are not used

Test 100

  • test: Retrieve the cluster roles defined in the cluster and review them for wildcards
  • command: kubectl get clusterroles -o yaml
  • verif: Verify that wildcards are not used

Test 101

  • test: Review the users who have create access to pod objects in the Kubernetes API.
  • command: # NO COMMAND
  • verif:

Test 102

  • test: For each namespace in the cluster, review the rights assigned to the default service account
  • command: # NO COMMAND
  • verif: Ensure that it has no roles or cluster roles bound to it apart from the defaults. Additionally ensure that the automountServiceAccountToken: false setting is in place for each default service account.
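
As a starting point (the namespace is a placeholder), check the token automount setting on the default service account, then look for bindings that reference it; the grep below is rough and its matches still require manual review:

kubectl get serviceaccount default -n <namespace> -o jsonpath='{.automountServiceAccountToken}'
kubectl get rolebindings --all-namespaces -o yaml | grep -B 10 'name: default'
kubectl get clusterrolebindings -o yaml | grep -B 10 'name: default'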

Test 103

  • test: Review pod and service account objects in the cluster
  • command: # NO COMMAND
  • verif: Ensure that the option automountServiceAccountToken: false is set, unless the resource explicitly requires this access.
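
One way to list the pod-level setting across the cluster (an empty third column means the field is unset on the pod and the service account value applies):

kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.spec.automountServiceAccountToken}{"\n"}{end}'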

Test 104

  • test: Review a list of all credentials which have access to the cluster
  • command: # NO COMMAND
  • verif: Ensure that the group system:masters is not used.
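
For credentials based on client certificates, group membership is carried in the Organization (O) field of the certificate subject; as a sketch (the file name is a placeholder):

openssl x509 -in <client-certificate.crt> -noout -subject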

Test 105

  • test: Review the users who have access to cluster roles or roles which provide the impersonate, bind or escalate privileges.
  • command: # NO COMMAND
  • verif:
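
A rough way to locate such rules (these words can also appear in annotations or binding names, so matches still require manual review):

kubectl get clusterroles -o yaml | grep -E 'impersonate|bind|escalate'
kubectl get roles --all-namespaces -o yaml | grep -E 'impersonate|bind|escalate'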

Test 106

  • test: Get the set of PSPs with the following command: kubectl get psp. For each PSP, check whether privileged is enabled:
  • command: kubectl get psp <name> -o=jsonpath='{.spec.privileged}'
  • verif: Verify that there is at least one PSP which does not return true.

Test 107

  • test: Get the set of PSPs with the following command: kubectl get psp. For each PSP, check whether hostPID is enabled:
  • command: kubectl get psp <name> -o=jsonpath='{.spec.hostPID}'
  • verif: Verify that there is at least one PSP which does not return true.

Test 108

  • test: Get the set of PSPs with the following command: kubectl get psp. For each PSP, check whether hostIPC is enabled:
  • command: kubectl get psp <name> -o=jsonpath='{.spec.hostIPC}'
  • verif: Verify that there is at least one PSP which does not return true.

Test 109

  • test: Get the set of PSPs with the following command: kubectl get psp. For each PSP, check whether hostNetwork is enabled:
  • command: kubectl get psp <name> -o=jsonpath='{.spec.hostNetwork}'
  • verif: Verify that there is at least one PSP which does not return true.

Test 110

  • test: Get the set of PSPs with the following command: kubectl get psp. For each PSP, check whether allowPrivilegeEscalation is enabled:
  • command: kubectl get psp <name> -o=jsonpath='{.spec.allowPrivilegeEscalation}'
  • verif: Verify that there is at least one PSP which does not return true.

Test 111

  • test: Get the set of PSPs with the following command: kubectl get psp. For each PSP, check whether running containers as root is enabled:
  • command: kubectl get psp <name> -o=jsonpath='{.spec.runAsUser.rule}'
  • verif: Verify that there is at least one PSP which returns MustRunAsNonRoot or MustRunAs with the range of UIDs not including 0.

Test 112

  • test: Get the set of PSPs with the following command: kubectl get psp. For each PSP, check whether NET_RAW is disabled:
  • command: kubectl get psp <name> -o=jsonpath='{.spec.requiredDropCapabilities}'
  • verif: Verify that there is at least one PSP which returns NET_RAW or ALL.

Test 113

  • test: Get the set of PSPs with the following command:
  • command: kubectl get psp
  • verif: Verify that there are no PSPs present which have allowedCapabilities set to anything other than an empty array.
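
Following the same pattern as the other PSP checks, the field can be read per PSP (the name is a placeholder):

kubectl get psp <name> -o=jsonpath='{.spec.allowedCapabilities}'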

Test 114

  • test: Get the set of PSPs with the following command: kubectl get psp. For each PSP, check whether capabilities have been forbidden:
  • command: kubectl get psp <name> -o=jsonpath='{.spec.requiredDropCapabilities}'
  • verif:

Test 115

  • test: Review the documentation of the CNI plugin in use by the cluster
  • command: # NO COMMAND
  • verif: Confirm that it supports Ingress and Egress network policies.

Test 116

  • test: Run the below command and review the NetworkPolicy objects created in the cluster.
  • command: kubectl get networkpolicy --all-namespaces
  • verif: Ensure that each namespace defined in the cluster has at least one Network Policy.
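
A small helper to review coverage per namespace, as a sketch assuming a bash shell:

for ns in $(kubectl get namespaces -o jsonpath='{.items[*].metadata.name}'); do
  echo "== ${ns}"
  kubectl get networkpolicy -n "${ns}"
done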

Test 117

  • test: Run the following command to find references to objects which use environment variables defined from secrets
  • command: kubectl get all -o jsonpath='{range .items[?(@..secretKeyRef)]} {.kind} {.metadata.name} {"\n"}{end}' -A
  • verif:

Test 118

  • test: Review your secrets management implementation.
  • command: # NO COMMAND
  • verif:

Test 119

  • test: Review the pod definitions in your cluster
  • command: # NO COMMAND
  • verif: Verify that image provenance is configured as appropriate.

Test 120

  • test: Run the below command and review the namespaces created in the cluster.
  • command: kubectl get namespaces
  • verif: Ensure that these namespaces are the ones you need and are adequately administered as per your requirements.

Test 121

  • test: Review the pod definitions in your cluster.
  • command: # NO COMMAND
  • verif: Verify that the pod definition contains the following: securityContext: seccompProfile: type: RuntimeDefault
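
A possible spot check of the pod-level setting (the profile may also be set per container in the container securityContext, so an empty value here is not necessarily a finding):

kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.spec.securityContext.seccompProfile.type}{"\n"}{end}'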

Test 122

  • test: Review the pod definitions in your cluster
  • command: # NO COMMAND
  • verif: Verify that you have security contexts defined as appropriate.

Test 123

  • test: Run this command to list objects in the default namespace
  • command: kubectl get all
  • verif: The only entries there should be system-managed resources such as the kubernetes service.
