# Kubeaudit

## What is Kubeaudit?

The kubeaudit ScanType is being deprecated in the secureCodeBox, since kubeaudit will no longer be maintained, as described in the [kubeaudit GitHub repository](https://github.com/Shopify/kubeaudit). The scanner will be removed in the upcoming v5 release.
Kubeaudit finds security misconfigurations in your Kubernetes resources and gives tips on how to resolve them.

Kubeaudit comes with a large list of "auditors" which test various aspects, like the SecurityContext of pods. You can find the complete list of auditors in the [kubeaudit documentation](https://github.com/Shopify/kubeaudit#auditors).
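To get a feel for what the auditors check, kubeaudit can also be run locally against a plain manifest. A minimal sketch, assuming the upstream kubeaudit binary is installed and `deployment.yaml` is a manifest of yours:

```bash
# Run all auditors against a local manifest file;
# "all" and -f/--manifest are part of the upstream kubeaudit CLI.
kubeaudit all -f deployment.yaml
```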
To learn more about kubeaudit itself, visit the [kubeaudit GitHub repository](https://github.com/Shopify/kubeaudit).
## Deployment

The kubeaudit chart can be deployed via helm:

```bash
# Install HelmChart (use -n to configure another namespace)
helm upgrade --install kubeaudit oci://ghcr.io/securecodebox/helm/kubeaudit
```
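After the install you can verify that the ScanType was registered. A quick sanity check, assuming the chart was installed into the current namespace:

```bash
# ScanTypes are a secureCodeBox custom resource;
# this should list an entry named "kubeaudit"
kubectl get scantypes
```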
## Scanner Configuration

The `parameters` of a scan are passed directly to the kubeaudit CLI, so the scan can be configured with kubeaudit's own flags; please take a look at the [kubeaudit documentation](https://github.com/Shopify/kubeaudit) for more configuration options. For example:

- To audit only the resources in a specific namespace, use the `-n` (`--namespace`) option. Example: `kubeaudit all -n default`
- To report only findings at or above a given severity (`info`, `warning`, or `error`), use the `--minseverity` option. Example: `kubeaudit all --minseverity warning`

These flags map directly onto the `parameters` list of a Scan, as shown in the sketch below.
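A hedged sketch of how such a flag ends up in a Scan resource (the metadata name is hypothetical; `--minseverity` is assumed from the upstream kubeaudit CLI):

```yaml
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  # hypothetical example name
  name: "kubeaudit-minseverity-example"
spec:
  scanType: "kubeaudit"
  parameters:
    # forwarded verbatim to the kubeaudit CLI
    - "--minseverity"
    - "warning"
```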
## Requirements

Kubernetes: `>=v1.11.0-0`
## Values
Key | Type | Default | Description |
---|---|---|---|
cascadingRules.enabled | bool | false | Enables or disables the installation of the default cascading rules for this scanner |
imagePullSecrets | list | [] | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) |
kubeauditScope | string | "namespace" | Automatically sets up rbac roles for kubeaudit to access the resources it scans. Can be either "cluster" (ClusterRole) or "namespace" (Role) |
parser.affinity | object | {} | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
parser.env | list | [] | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
parser.image.pullPolicy | string | "IfNotPresent" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
parser.image.repository | string | "docker.io/securecodebox/parser-kubeaudit" | Parser image repository |
parser.image.tag | string | defaults to the chart's version | Parser image tag |
parser.nodeSelector | object | {} | Optional nodeSelector settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) |
parser.resources | object | { requests: { cpu: "200m", memory: "100Mi" }, limits: { cpu: "400m", memory: "200Mi" } } | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
parser.scopeLimiterAliases | object | {} | Optional finding aliases to be used in the scopeLimiter. |
parser.tolerations | list | [] | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
parser.ttlSecondsAfterFinished | string | nil | Seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
scanner.activeDeadlineSeconds | string | nil | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) when considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) |
scanner.affinity | object | {} | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
scanner.backoffLimit | int | 3 | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering a scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) |
scanner.env | list | [] | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
scanner.extraContainers | list | [] | Optional additional Containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) |
scanner.extraVolumeMounts | list | [] | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
scanner.extraVolumes | list | [] | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
scanner.image.pullPolicy | string | "IfNotPresent" | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
scanner.image.repository | string | "docker.io/securecodebox/scanner-kubeaudit" | Container Image to run the scan |
scanner.image.tag | string | nil | Defaults to the chart's appVersion |
scanner.nameAppend | string | nil | Append a string to the default scanType name |
scanner.nodeSelector | object | {} | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) |
scanner.podSecurityContext | object | {} | Optional securityContext set on scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
scanner.resources | object | {} | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) |
scanner.securityContext | object | {"allowPrivilegeEscalation":false,"capabilities":{"drop":["all"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsNonRoot":true} | Optional securityContext set on scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
scanner.securityContext.allowPrivilegeEscalation | bool | false | Ensures that users' privileges cannot be escalated |
scanner.securityContext.capabilities.drop[0] | string | "all" | This drops all Linux capabilities from the container |
scanner.securityContext.privileged | bool | false | Ensures that the scanner container is not run in privileged mode |
scanner.securityContext.readOnlyRootFilesystem | bool | true | Prevents write access to the container's file system |
scanner.securityContext.runAsNonRoot | bool | true | Enforces that the scanner image is run as a non root user |
scanner.suspend | bool | false | If set to true, the scan job will be suspended after creation. You can then resume the job by setting spec.suspend to false (e.g. via kubectl patch) or by using a job scheduler like Kueue |
scanner.tolerations | list | [] | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
scanner.ttlSecondsAfterFinished | string | nil | Seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
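Any of these values can be overridden at install time. A small sketch, assuming you want cluster-wide audit scope and an adjusted parser memory request (the chosen values are illustrative, not recommendations):

```bash
helm upgrade --install kubeaudit oci://ghcr.io/securecodebox/helm/kubeaudit \
  --set="kubeauditScope=cluster" \
  --set="parser.resources.requests.memory=200Mi"
```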
## License

Code of secureCodeBox is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
## CPU architectures

The scanner is currently supported for these CPU architectures:

- linux/amd64
## Examples

### juice-shop

In this example we execute a kubeaudit scan against the intentionally vulnerable juice-shop.

#### Initialize juice-shop in cluster

Before executing the scan, make sure to set up juice-shop:

```bash
helm upgrade --install juice-shop oci://ghcr.io/securecodebox/helm/juice-shop --wait
```
After that you can execute the scan in this directory:

```bash
kubectl apply -f scan.yaml
```
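You can follow the scan's progress through the Scan custom resource; a quick check, assuming the scan runs in the current namespace:

```bash
# The scan is finished once its state reaches "Done"
kubectl get scans --watch
```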
Troubleshooting:

Make sure to install juice-shop in the same namespace as the scanner! If your juice-shop runs in, e.g., the `kubeaudit-tests` namespace, install the chart and run the scan there too:

```bash
# Install HelmChart in kubeaudit-tests namespace
helm upgrade --install kubeaudit oci://ghcr.io/securecodebox/helm/kubeaudit -n kubeaudit-tests

# Run scan in kubeaudit-tests namespace
kubectl apply -f scan.yaml -n kubeaudit-tests
```

Also, you must adjust the namespace passed to kubeaudit via the `-n` parameter in scan.yaml.
Alternatively, you can set the scope of kubeaudit to cluster:

```bash
helm upgrade --install kubeaudit oci://ghcr.io/securecodebox/helm/kubeaudit -n kubeaudit-tests --set="kubeauditScope=cluster"
```
The scan.yaml referenced above:

```yaml
# SPDX-FileCopyrightText: the secureCodeBox authors
#
# SPDX-License-Identifier: Apache-2.0
apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "kubeaudit-juiceshop"
spec:
  scanType: "kubeaudit"
  parameters:
    - "-n"
    - "default"
```