Kube Hunter

What is kube-hunter?

kube-hunter hunts for security weaknesses in Kubernetes clusters. The tool was developed to increase awareness and visibility for security issues in Kubernetes environments. You should NOT run kube-hunter on a Kubernetes cluster that you don't own!

To learn more about the kube-hunter scanner itself, visit the kube-hunter GitHub repository or the kube-hunter website.

Deployment

The kube-hunter chart can be deployed via helm:

# Install HelmChart (use -n to configure another namespace)
helm upgrade --install kube-hunter secureCodeBox/kube-hunter

Scanner Configuration

The following security scan configuration examples are based on the kube-hunter documentation; please refer to the original documentation for more configuration examples.

  • To specify remote machines for hunting, select option 1 in the interactive prompt or use the --remote option. Example: kube-hunter --remote some.node.com
  • To scan all of the machine's network interfaces, use the --interface option. Example: kube-hunter --interface
  • To scan a specific CIDR, use the --cidr option. Example: kube-hunter --cidr 192.168.0.0/24 (see the Scan sketch below for how such options are passed to a secureCodeBox Scan)
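
Within the secureCodeBox these kube-hunter options are supplied through the parameters list of a Scan resource, just like the --pod flag in the in-cluster example further down. A minimal sketch of a CIDR scan could look like the following; the scan name and the subnet are illustrative:

apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
  name: "kube-hunter-cidr"
spec:
  scanType: "kube-hunter"
  parameters:
    # Illustrative subnet taken from the option example above; only scan networks you own.
    - "--cidr"
    - "192.168.0.0/24"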

Requirements

Kubernetes: >=v1.11.0-0

Values

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| cascadingRules.enabled | bool | `false` | Enables or disables the installation of the default cascading rules for this scanner |
| imagePullSecrets | list | `[]` | Define imagePullSecrets when a private registry is used (see: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/) |
| parser.affinity | object | `{}` | Optional affinity settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
| parser.env | list | `[]` | Optional environment variables mapped into each parseJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
| parser.image.pullPolicy | string | `"IfNotPresent"` | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if the :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| parser.image.repository | string | `"docker.io/securecodebox/parser-kube-hunter"` | Parser image repository |
| parser.image.tag | string | defaults to the chart's version | Parser image tag |
| parser.nodeSelector | object | `{}` | Optional nodeSelector settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) |
| parser.resources | object | `{ requests: { cpu: "200m", memory: "100Mi" }, limits: { cpu: "400m", memory: "200Mi" } }` | Optional resources lets you control resource limits and requests for the parser container. See https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/ |
| parser.scopeLimiterAliases | object | `{}` | Optional finding aliases to be used in the scopeLimiter. |
| parser.tolerations | list | `[]` | Optional tolerations settings that control how the parser job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| parser.ttlSecondsAfterFinished | string | `nil` | Seconds after which the Kubernetes job for the parser will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
| scanner.activeDeadlineSeconds | string | `nil` | There are situations where you want to fail a scan Job after some amount of time. To do so, set activeDeadlineSeconds to define an active deadline (in seconds) after which the scan Job is considered failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#job-termination-and-cleanup) |
| scanner.affinity | object | `{}` | Optional affinity settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes-using-node-affinity/) |
| scanner.backoffLimit | int | `3` | There are situations where you want to fail a scan Job after some amount of retries due to a logical error in configuration etc. To do so, set backoffLimit to specify the number of retries before considering the scan Job as failed. (see: https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy) |
| scanner.env | list | `[]` | Optional environment variables mapped into each scanJob (see: https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) |
| scanner.extraContainers | list | `[]` | Optional additional containers started with each scanJob (see: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) |
| scanner.extraVolumeMounts | list | `[]` | Optional VolumeMounts mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
| scanner.extraVolumes | list | `[]` | Optional Volumes mapped into each scanJob (see: https://kubernetes.io/docs/concepts/storage/volumes/) |
| scanner.image.pullPolicy | string | `"IfNotPresent"` | Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if the :latest tag is specified, or IfNotPresent otherwise. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images |
| scanner.image.repository | string | `"docker.io/securecodebox/scanner-kube-hunter"` | Container image to run the scan |
| scanner.image.tag | string | `nil` | Defaults to the chart's appVersion |
| scanner.nameAppend | string | `nil` | Append a string to the default scantype name. |
| scanner.nodeSelector | object | `{}` | Optional nodeSelector settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-pods-nodes/) |
| scanner.podSecurityContext | object | `{}` | Optional securityContext set on the scanner pod (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
| scanner.resources | object | `{}` | CPU/memory resource requests/limits (see: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/, https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/) |
| scanner.securityContext | object | `{"allowPrivilegeEscalation":false,"capabilities":{"drop":["all"]},"privileged":false,"readOnlyRootFilesystem":true,"runAsNonRoot":false}` | Optional securityContext set on the scanner container (see: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) |
| scanner.securityContext.allowPrivilegeEscalation | bool | `false` | Ensures that the user's privileges cannot be escalated |
| scanner.securityContext.capabilities.drop[0] | string | `"all"` | Drops all Linux privileges from the container. |
| scanner.securityContext.privileged | bool | `false` | Ensures that the scanner container is not run in privileged mode |
| scanner.securityContext.readOnlyRootFilesystem | bool | `true` | Prevents write access to the container's file system |
| scanner.securityContext.runAsNonRoot | bool | `false` | Enforces that the scanner image is run as a non-root user |
| scanner.suspend | bool | `false` | If set to true, the scan job will be suspended after creation. You can then resume the job using kubectl resume <jobname> or using a job scheduler like kueue |
| scanner.tolerations | list | `[]` | Optional tolerations settings that control how the scanner job is scheduled (see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/) |
| scanner.ttlSecondsAfterFinished | string | `nil` | Seconds after which the Kubernetes job for the scanner will be deleted. Requires the Kubernetes TTLAfterFinished controller: https://kubernetes.io/docs/concepts/workloads/controllers/ttlafterfinished/ |
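
To give a rough idea of how a few of these values are combined, the following values.yaml is a minimal sketch using keys from the table above; the environment variable, resource figures, node label, and TTL are placeholders rather than recommendations:

# values.yaml (sketch): illustrative overrides for the kube-hunter chart
scanner:
  env:
    # Hypothetical variable, shown only to illustrate the EnvVar list shape.
    - name: EXAMPLE_VARIABLE
      value: "example"
  resources:
    requests:
      cpu: "100m"
      memory: "128Mi"
    limits:
      cpu: "500m"
      memory: "256Mi"
  nodeSelector:
    # Schedule scan jobs onto amd64 nodes via the well-known architecture label.
    kubernetes.io/arch: amd64
parser:
  # Delete the parser job 10 minutes after it finishes (requires the TTLAfterFinished controller).
  ttlSecondsAfterFinished: 600

Such a file can be passed to the helm upgrade --install command from the Deployment section via helm's -f flag.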

License

The code of secureCodeBox is licensed under the Apache License 2.0.

CPU architectures

The scanner is currently supported on these CPU architectures:

  • linux/amd64

Examples

in-cluster

# SPDX-FileCopyrightText: the secureCodeBox authors
#
# SPDX-License-Identifier: Apache-2.0

apiVersion: "execution.securecodebox.io/v1"
kind: Scan
metadata:
name: "kube-hunter-in-cluster"
spec:
scanType: "kube-hunter"
parameters:
- "--pod"