Best Practices
Apply multiple Policy Controller bundles
This section explains how to enable Policy Controller bundles.
For more detailed information about applying and using policy bundles, read the instructions for the bundle that you want to apply using the left navigation menu. For more information about policy bundles, see the Policy Controller bundles overview.
If you installed Policy Controller using the KOSMOS console, the Samsung Security Checklist bundle is installed by default, but you can enable more bundles.
Before you begin
Apply policy bundles
To apply one or more policy bundles on a cluster using the KOSMOS console, complete the following steps:
In the KOSMOS console, go to the Policy page under the Fleet section.
Under the Settings tab, in the cluster table, select Edit in the Edit configuration column.
In the Add/Edit policy bundles menu, ensure the template library is toggled on.
To enable all policy bundles, toggle Add all policy bundles on.
To enable individual policy bundles, toggle on each policy bundle that you want to enable.
Optional: To exempt a namespace from enforcement, expand the Show advanced settings menu. In the Exempt namespaces field, provide a list of valid namespaces.
[!TIP] Best practice: Exempt system namespaces to avoid errors in your environment. You can find instructions for exempting namespaces, and a list of common namespaces created by KOSMOS, on the Exclude namespaces page.
Select Save changes.
You can view additional information about your policy coverage and violations using the Policy Controller dashboard.
Troubleshooting
You can’t modify policy bundles that are installed directly by using the instructions on this page. If you’re having issues with a policy bundle and need to make edits, install the bundle by using one of the methods on the individual policy bundle’s page. These methods pull the policy bundle from a Git repository, which lets you make changes. For example, to edit the CIS Kubernetes Benchmark v1.5.1 bundle, follow the instructions on Use CIS Kubernetes Benchmark v1.5.1 policy constraints instead of this page.
What’s next
- Learn more about applying individual constraints.
Use CIS Kubernetes benchmark v1.5.1 policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the CIS bundle to audit the compliance of your cluster against the CIS Kubernetes Benchmark v1.5.1. This benchmark is a set of recommendations for configuring Kubernetes to support a strong security posture.
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce those requirements.
This bundle of constraints addresses and enforces policies in the following domains:
- RBAC and service accounts
- Pod Security Policies
- Network policies and CNI
- Secrets management
- General policies
Note: This bundle has not been certified by CIS.
CIS Kubernetes v1.5.1 policy bundle constraints
| Constraint Name | Control Description | Control ID |
|---|---|---|
| cis-k8s-v1.5.1-no-secrets-as-env-vars | Prefer using Secrets as files over Secrets as environment variables | 5.4.1 |
| cis-k8s-v1.5.1-pods-require-security-context | Apply Security Context to your Pods and containers | 5.7.3 |
| cis-k8s-v1.5.1-prohibit-role-wildcard-access | Restricts the use of wildcards in Roles and ClusterRoles | 5.1.3 |
| cis-k8s-v1.5.1-psp-allow-privilege-escalation-container | Minimize the admission of containers with allowPrivilegeEscalation | 5.2.5 |
| cis-k8s-v1.5.1-psp-capabilities | Minimize the admission of containers with the NET_RAW capability Minimize the admission of containers with added capabilities Minimize the admission of containers with capabilities assigned | 5.2.7 5.2.8 5.2.9 |
| cis-k8s-v1.5.1-psp-host-namespace | Minimize the admission of containers wanting to share the host process ID namespace Minimize the admission of containers wanting to share the host IPC namespace | 5.2.2 5.2.3 |
| cis-k8s-v1.5.1-psp-host-network-ports | Minimize the admission of containers wanting to share the host network namespace | 5.2.4 |
| cis-k8s-v1.5.1-psp-pods-must-run-as-nonroot | Minimize the admission of root containers | 5.2.6 |
| cis-k8s-v1.5.1-psp-privileged-container | Minimize the admission of privileged containers | 5.2.1 |
| cis-k8s-v1.5.1-psp-seccomp-default | Ensure that the seccomp profile is set to docker/default in your Pod definitions | 5.7.2 |
| cis-k8s-v1.5.1-require-namespace-network-policies | Ensure that all namespaces have Network Policies defined | 5.3.2 |
| cis-k8s-v1.5.1-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role | 5.1.1 |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Audit CIS Kubernetes v1.5.1 policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads’ compliance with the CIS policies outlined in the preceding table, you can deploy these constraints in “audit” mode to reveal violations and, more importantly, give yourself a chance to fix them before enabling enforcement on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
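For reference, each constraint in the bundle is an ordinary Gatekeeper constraint resource with `spec.enforcementAction` set to `dryrun`. The following sketch is illustrative only: the kind name and label are inferred from the applied-output names in this section, and the `match` block is an assumption, not the exact bundle contents (the real resources are pulled from the Git repository).

```yaml
# Illustrative sketch of one bundle constraint in audit (dryrun) mode.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sNoEnvVarSecrets          # inferred from the applied resource name
metadata:
  name: cis-k8s-v1.5.1-no-secrets-as-env-vars
  labels:
    bundleName: cis-k8s-v1.5.1    # label used by the kubectl commands below
spec:
  enforcementAction: dryrun       # audit only; violations are recorded, not blocked
  match:                          # assumed scope for illustration
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
```

The `bundleName` label is what lets the later `kubectl get constraint -l bundleName=cis-k8s-v1.5.1` commands select every constraint in the bundle at once.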
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/cis-k8s-v1.5.1
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/cis-k8s-v1.5.1
The output is similar to the following:
k8snoenvvarsecrets.constraints.gatekeeper.sh/cis-k8s-v1.5.1-no-secrets-as-env-vars created
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-allow-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-pods-must-run-as-nonroot created
k8spspcapabilities.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-capabilities created
k8spsphostnamespace.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-host-namespace created
k8spsphostnetworkingports.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-host-network-ports created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-privileged-container created
k8spspseccomp.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-seccomp-default created
k8spodsrequiresecuritycontext.constraints.gatekeeper.sh/cis-k8s-v1.5.1-pods-require-security-context created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/cis-k8s-v1.5.1-prohibit-role-wildcard-access created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/cis-k8s-v1.5.1-require-namespace-network-policies created
k8srestrictrolebindings.constraints.gatekeeper.sh/cis-k8s-v1.5.1-restrict-clusteradmin-rolebindings created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/cis-k8s-v1.5.1
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8snoenvvarsecrets.constraints.gatekeeper.sh/cis-k8s-v1.5.1-no-secrets-as-env-vars dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-allow-privilege-escalation dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-pods-must-run-as-nonroot dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-capabilities dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-host-namespace dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-host-network-ports dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-privileged-container dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-seccomp-default dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spodsrequiresecuritycontext.constraints.gatekeeper.sh/cis-k8s-v1.5.1-pods-require-security-context dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/cis-k8s-v1.5.1-prohibit-role-wildcard-access dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/cis-k8s-v1.5.1-require-namespace-network-policies dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/cis-k8s-v1.5.1-restrict-clusteradmin-rolebindings dryrun 0
View policy violations
After the policy constraints are installed in audit mode, you can view violations on the cluster in the Policy Controller dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l bundleName=cis-k8s-v1.5.1 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, a listing of the violation messages per constraint can be viewed with:
kubectl get constraint -l bundleName=cis-k8s-v1.5.1 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
Change CIS Kubernetes v1.5.1 policy bundle enforcement action
Once you’ve reviewed policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on or even denies non-compliant resources when they are applied to the cluster.
[!WARNING] The deny enforcement action should be used with care because it can block required changes, interrupting critical workloads or the cluster itself. It is strongly recommended that only the warn or dryrun enforcement actions are used on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=cis-k8s-v1.5.1 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=cis-k8s-v1.5.1
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
  - image: wordpress
    name: wordpress
    ports:
    - containerPort: 80
      name: wordpress
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [cis-k8s-v1.5.1-psp-pods-must-run-as-nonroot] Container wordpress is attempting to run without a required securityContext/runAsNonRoot or securityContext/runAsUser != 0
Warning: [cis-k8s-v1.5.1-psp-allow-privilege-escalation] Privilege escalation container is not allowed: wordpress
Warning: [cis-k8s-v1.5.1-psp-seccomp-default] Seccomp profile 'not configured' is not allowed for container 'wordpress'. Found at: no explicit profile found. Allowed profiles: {"RuntimeDefault", "docker/default", "runtime/default"}
Warning: [cis-k8s-v1.5.1-psp-capabilities] container <wordpress> is not dropping all required capabilities. Container must drop all of ["NET_RAW"] or "ALL"
Warning: [cis-k8s-v1.5.1-pods-require-security-context] securityContext must be defined for all Pod containers
pod/wp-non-compliant created
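Each of these warnings points at a missing security-context field, so a compliant variant of the Pod needs an explicit `securityContext` on the container. The following is a hedged sketch: the field values are chosen to satisfy the constraints listed in the warnings above, not taken from the bundle itself.

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-compliant
spec:
  containers:
  - image: wordpress
    name: wordpress
    securityContext:                   # addresses pods-require-security-context
      runAsNonRoot: true               # addresses psp-pods-must-run-as-nonroot
      allowPrivilegeEscalation: false  # addresses psp-allow-privilege-escalation
      seccompProfile:
        type: RuntimeDefault           # addresses psp-seccomp-default
      capabilities:
        drop: ["NET_RAW"]              # addresses psp-capabilities
EOF
```

Note that admission control only validates the Pod spec; an image such as stock wordpress may still fail at runtime when forced to run as a non-root user, so you may need to adjust the image or `runAsUser` to match your workload.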
Remove CIS Kubernetes v1.5.1 policy bundle
If needed, the CIS K8s policy bundle can be removed from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=cis-k8s-v1.5.1
Use NIST SP 800-190 policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the NIST SP 800-190 which implements controls listed in National Institute of Standards and Technology (NIST) Special Publication (SP) 800-190 , Application Container Security Guide. The bundle is intended to help organizations with application container security including image security, container runtime security, network security and host system security to name a few.
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce those requirements.
[!Note] This bundle has not been certified by NIST.
NIST SP 800-190 policy bundle constraints
| Constraint Name | Constraint Description | Control ID |
|---|---|---|
| nist-sp-800-190-apparmor | Restricts AppArmor profiles allowed for Pods. | CM-3 Configuration Change Control |
| nist-sp-800-190-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | |
| nist-sp-800-190-capabilities | Restricts additional Capabilities allowed for Pods. | |
| nist-sp-800-190-enforce-config-management | Requires that Config Sync is running with Drift Prevention enabled and at least one RootSync object on the cluster. | |
| nist-sp-800-190-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | |
| nist-sp-800-190-host-network | Restricts containers from running with the hostNetwork flag set to true. | |
| nist-sp-800-190-privileged-containers | Restricts containers with securityContext.privileged set to true. | |
| nist-sp-800-190-proc-mount-type | Requires the default /proc masks for Pods. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-190-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | |
| nist-sp-800-190-seccomp | Seccomp profile must not be explicitly set to Unconfined. | |
| nist-sp-800-190-selinux | Restricts the SELinux configuration for Pods. | |
| nist-sp-800-190-sysctls | Restricts the allowed Sysctls for Pods. | |
| nist-sp-800-190-apparmor | Restricts AppArmor profiles allowed for Pods. | CM-7 Least Functionality |
| nist-sp-800-190-capabilities | Restricts additional Capabilities allowed for Pods. | |
| nist-sp-800-190-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | |
| nist-sp-800-190-host-network | Restricts containers from running with the hostNetwork flag set to true. | |
| nist-sp-800-190-privileged-containers | Restricts containers with securityContext.privileged set to true. | |
| nist-sp-800-190-proc-mount-type | Requires the default /proc masks for Pods. | |
| nist-sp-800-190-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-190-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | |
| nist-sp-800-190-seccomp | Seccomp profile must not be explicitly set to Unconfined. | |
| nist-sp-800-190-selinux | Restricts the SELinux configuration for Pods. | |
| nist-sp-800-190-sysctls | Restricts the allowed Sysctls for Pods. | |
| nist-sp-800-190-asm-peer-authn-strict-mtls | Ensures PeerAuthentications cannot overwrite strict mTLS. | SC-8 Transmission Confidentiality and Integrity |
| nist-sp-800-190-block-creation-with-default-serviceaccount | Restrict resource creation using a default service account. | IA-4 Identifier Management |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | |
| nist-sp-800-190-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | SI-7 Software, Firmware, and Information Integrity |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-190-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | CM-6 Configuration Settings |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-190-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | |
| nist-sp-800-190-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | AC-4 Information Flow Enforcement |
| nist-sp-800-190-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-190-cpu-and-memory-limits-required | Requires Pods specify cpu and memory limits. | SC-6 Resource Availability |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-190-nodes-have-consistent-time | Ensures consistent and correct time on Nodes by allowing only Container-Optimized OS (COS) or Ubuntu as the OS image. | AU-8 Time Stamps |
| nist-sp-800-190-require-binauthz | Requires the Binary Authorization Validating Admission Webhook. | AC-6 Least Privilege |
| nist-sp-800-190-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-190-restrict-repos | Restricts container images to an allowed repos list. | |
| nist-sp-800-190-restrict-role-wildcards | Restricts the use of wildcards in Roles and ClusterRoles. | |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | |
| nist-sp-800-190-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | CA-9 Internal System Connections |
| nist-sp-800-190-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | SC-4 Information in Shared Resources |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | AC-2 Account Management |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | AC-3 Access Enforcement |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | IA-2 Identification and Authentication (Organizational Users) |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | MA-4 Nonlocal Maintenance |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Configure your cluster and workload
- Container images are limited to an allowed repos list, which can be customized if required in nist-sp-800-190-restrict-repos.
- Nodes must use Ubuntu for their image in nist-sp-800-190-nodes-have-consistent-time.
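If you need to customize the allowed repos, you can edit the constraint after applying the bundle. The sketch below assumes the constraint follows the standard Gatekeeper K8sAllowedRepos template, where the list lives under `spec.parameters.repos`; verify the exact field names against the bundle source before relying on it. The internal registry shown is hypothetical.

```yaml
# Sketch: customizing the allowed image repositories (assumed
# K8sAllowedRepos shape; check the bundle source for the real fields).
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: nist-sp-800-190-restrict-repos
  labels:
    bundleName: nist-sp-800-190
spec:
  enforcementAction: dryrun
  parameters:
    repos:                       # replace with your own trusted registries
    - "gcr.io/gke-release/"
    - "registry.example.com/"    # hypothetical internal registry
```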
Audit NIST SP 800-190 policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads’ compliance with the NIST policies outlined in the preceding table, you can deploy these constraints in “audit” mode to reveal violations and, more importantly, give yourself a chance to fix them before enabling enforcement on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nist-sp-800-190
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nist-sp-800-190
The output is similar to the following:
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/nist-sp-800-190-asm-peer-authn-strict-mtls created
k8sallowedrepos.constraints.gatekeeper.sh/nist-sp-800-190-restrict-repos created
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/nist-sp-800-190-block-creation-with-default-serviceaccount created
k8sblockobjectsoftype.constraints.gatekeeper.sh/nist-sp-800-190-block-secrets-of-type-basic-auth created
k8spspapparmor.constraints.gatekeeper.sh/nist-sp-800-190-apparmor created
k8spspcapabilities.constraints.gatekeeper.sh/nist-sp-800-190-capabilities created
k8spspforbiddensysctls.constraints.gatekeeper.sh/nist-sp-800-190-sysctls created
k8spsphostfilesystem.constraints.gatekeeper.sh/nist-sp-800-190-restrict-hostpath-volumes created
k8spsphostnamespace.constraints.gatekeeper.sh/nist-sp-800-190-host-namespaces created
k8spsphostnetworkingports.constraints.gatekeeper.sh/nist-sp-800-190-host-network created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nist-sp-800-190-privileged-containers created
k8spspprocmount.constraints.gatekeeper.sh/nist-sp-800-190-proc-mount-type created
k8spspselinuxv2.constraints.gatekeeper.sh/nist-sp-800-190-selinux created
k8spspseccomp.constraints.gatekeeper.sh/nist-sp-800-190-seccomp created
k8spspvolumetypes.constraints.gatekeeper.sh/nist-sp-800-190-restrict-volume-types created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/nist-sp-800-190-restrict-role-wildcards created
k8srequirecosnodeimage.constraints.gatekeeper.sh/nist-sp-800-190-nodes-have-consistent-time created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nist-sp-800-190-require-namespace-network-policies created
k8srequiredlabels.constraints.gatekeeper.sh/nist-sp-800-190-require-managed-by-label created
k8srequiredresources.constraints.gatekeeper.sh/nist-sp-800-190-cpu-and-memory-limits-required created
k8srestrictrbacsubjects.constraints.gatekeeper.sh/nist-sp-800-190-restrict-rbac-subjects created
k8srestrictrolebindings.constraints.gatekeeper.sh/nist-sp-800-190-restrict-clusteradmin-rolebindings created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get constraints -l bundleName=nist-sp-800-190
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/nist-sp-800-190-apparmor dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrbacsubjects.constraints.gatekeeper.sh/nist-sp-800-190-restrict-rbac-subjects dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspforbiddensysctls.constraints.gatekeeper.sh/nist-sp-800-190-sysctls dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspvolumetypes.constraints.gatekeeper.sh/nist-sp-800-190-restrict-volume-types dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/nist-sp-800-190-host-network dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/nist-sp-800-190-host-namespaces dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/nist-sp-800-190-restrict-role-wildcards dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/nist-sp-800-190-block-creation-with-default-serviceaccount dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/nist-sp-800-190-restrict-hostpath-volumes dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/nist-sp-800-190-seccomp dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/nist-sp-800-190-asm-peer-authn-strict-mtls dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredresources.constraints.gatekeeper.sh/nist-sp-800-190-cpu-and-memory-limits-required dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprocmount.constraints.gatekeeper.sh/nist-sp-800-190-proc-mount-type dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sblockobjectsoftype.constraints.gatekeeper.sh/nist-sp-800-190-block-secrets-of-type-basic-auth dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nist-sp-800-190-require-namespace-network-policies dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/nist-sp-800-190-capabilities dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredlabels.constraints.gatekeeper.sh/nist-sp-800-190-require-managed-by-label dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nist-sp-800-190-privileged-containers dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sallowedrepos.constraints.gatekeeper.sh/nist-sp-800-190-restrict-repos dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/nist-sp-800-190-selinux dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/nist-sp-800-190-restrict-clusteradmin-rolebindings dryrun 0
View policy violations
After the policy constraints are installed in audit mode, you can view violations on the cluster in the Policy Controller dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l bundleName=nist-sp-800-190 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, a listing of the violation messages per constraint can be viewed with:
kubectl get constraint -l bundleName=nist-sp-800-190 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
Change NIST SP 800-190 policy bundle enforcement action
Once you’ve reviewed policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on or even denies non-compliant resources when they are applied to the cluster.
[!WARNING] The deny enforcement action should be used with care because it can block required changes, interrupting critical workloads or the cluster itself. It is strongly recommended that only the warn or dryrun enforcement actions are used on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to warn:
kubectl get constraints -l bundleName=nist-sp-800-190 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraints -l bundleName=nist-sp-800-190
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wp-non-compliant
spec:
  containers:
  - image: wordpress
    name: wordpress
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [nist-sp-800-190-cpu-and-memory-limits-required] container <wordpress> does not have <{"cpu", "memory"}> limits defined
Warning: [nist-sp-800-190-restrict-repos] container <wordpress> has an invalid image repo <wordpress>, allowed repos are ["gcr.io/gke-release/", "gcr.io/anthos-baremetal-release/", "gcr.io/config-management-release/", "gcr.io/kubebuilder/", "gcr.io/gkeconnect/", "gke.gcr.io/"]
pod/wp-non-compliant created
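Both warnings are resolvable in the Pod spec itself: add CPU and memory limits, and pull the image from a repository on the allowed list. The sketch below is illustrative; the image path under gcr.io/gke-release/ is hypothetical (that registry appears in the allowed list above, but the specific image may not exist there), and the limit values are arbitrary examples.

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wp-compliant
spec:
  containers:
  - image: gcr.io/gke-release/wordpress   # hypothetical path in an allowed repo
    name: wordpress
    resources:
      limits:            # addresses cpu-and-memory-limits-required
        cpu: 500m
        memory: 512Mi
EOF
```

Other constraints in the bundle may still warn on this Pod (for example, the require-managed-by-label constraint), so treat this as fixing only the two violations shown above.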
Remove NIST SP 800-190 policy bundle
If needed, the NIST SP 800-190 policy bundle can be removed from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=nist-sp-800-190
Use NIST SP 800-53 Rev. 5 policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the NIST SP 800-53 Rev. 5 bundle which implements controls listed in National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 Rev. 5 . The bundle may help organizations protect their systems and data from a variety of threats by implementing out-of-the-box security and privacy policies.
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
Important: This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce those requirements.
Note: This bundle has not been certified by NIST.
NIST SP 800-53 Rev. 5 policy bundle constraints
| Constraint Name | Constraint Description | Control ID |
|---|---|---|
| nist-sp-800-53-r5-apparmor | Restricts AppArmor profiles allowed for Pods. | CM-3 Configuration Change Control |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | |
| nist-sp-800-53-r5-capabilities | Restricts additional Capabilities allowed for Pods. | |
| nist-sp-800-53-r5-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | |
| nist-sp-800-53-r5-host-network | Restricts containers from running with the hostNetwork flag set to true. | |
| nist-sp-800-53-r5-privileged-containers | Restricts containers with securityContext.privileged set to true. | |
| nist-sp-800-53-r5-proc-mount-type | Requires the default /proc masks for Pods. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | |
| nist-sp-800-53-r5-seccomp | Seccomp profile must not be explicitly set to Unconfined. | |
| nist-sp-800-53-r5-selinux | Restricts the SELinux configuration for Pods. | |
| nist-sp-800-53-r5-sysctls | Restricts the allowed Sysctls for Pods. | |
| nist-sp-800-53-r5-apparmor | Restricts AppArmor profiles allowed for Pods. | CM-7 Least Functionality |
| nist-sp-800-53-r5-capabilities | Restricts additional Capabilities allowed for Pods. | |
| nist-sp-800-53-r5-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | |
| nist-sp-800-53-r5-host-network | Restricts containers from running with the hostNetwork flag set to true. | |
| nist-sp-800-53-r5-privileged-containers | Restricts containers with securityContext.privileged set to true. | |
| nist-sp-800-53-r5-proc-mount-type | Requires the default /proc masks for Pods. | |
| nist-sp-800-53-r5-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | |
| nist-sp-800-53-r5-seccomp | Seccomp profile must not be explicitly set to Unconfined. | |
| nist-sp-800-53-r5-selinux | Restricts the SELinux configuration for Pods. | |
| nist-sp-800-53-r5-sysctls | Restricts the allowed Sysctls for Pods. | |
| nist-sp-800-53-r5-asm-peer-authn-strict-mtls | Ensures PeerAuthentications cannot overwrite strict mTLS. | SC-8 Transmission Confidentiality and Integrity |
| nist-sp-800-53-r5-block-creation-with-default-serviceaccount | Restrict resource creation using a default service account. | IA-4 Identifier Management |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | SI-7 Software, Firmware, and Information Integrity |
| nist-sp-800-53-r5-require-binauthz | Requires the Binary Authorization Validating Admission Webhook. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | CM-6 Configuration Settings |
| nist-sp-800-53-r5-require-binauthz | Requires the Binary Authorization Validating Admission Webhook. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | SC-7 Boundary Protection |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | AC-4 Information Flow Enforcement |
| nist-sp-800-53-r5-require-binauthz | Requires the Binary Authorization Validating Admission Webhook. | |
| nist-sp-800-53-r5-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | AC-16 Security and Privacy Attributes |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | SA-8 Security and Privacy Engineering Principles |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | |
| nist-sp-800-53-r5-cpu-and-memory-limits-required | Requires Pods specify cpu and memory limits. | SC-6 Resource Availability |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-53-r5-nodes-have-consistent-time | Ensures consistent and correct time on Nodes by allowing only Container-Optimized OS (COS) or Ubuntu as the OS image. | AU-8 Time Stamps |
| nist-sp-800-53-r5-require-av-daemonset | Requires the presence of an Anti-Virus daemonset. | SI-3 Malicious Code Protection |
| nist-sp-800-53-r5-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-53-r5-restrict-repos | Restricts container images to an allowed repos list. | |
| nist-sp-800-53-r5-restrict-role-wildcards | Restricts the use of wildcards in Roles and ClusterRoles. | |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | |
| nist-sp-800-53-r5-restrict-storageclass | Restricts StorageClass to a list of StorageClass which encrypt by default. | |
| nist-sp-800-53-r5-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | CA-9 Internal System Connections |
| nist-sp-800-53-r5-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | SC-4 Information in Shared Resources |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | AC-2 Account Management |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | AC-3 Access Enforcement |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | IA-2 Identification and Authentication (Organizational Users) |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | MA-4 Nonlocal Maintenance |
| nist-sp-800-53-r5-restrict-storageclass | Restricts StorageClass to a list of StorageClass which encrypt by default. | SC-28 Protection of Information at Rest |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Configure your cluster and workload
- An antivirus solution is required. The default is the presence of a `daemonset` named `clamav` in the `clamav` namespace; however, the `daemonset`'s name and namespace can be customized to your implementation in the `nist-sp-800-53-r5-require-av-daemonset` constraint.
- Container images are limited to an allowed repos list, which can be customized if required in `nist-sp-800-53-r5-restrict-repos`.
- Nodes must use Container-Optimized OS (COS) or Ubuntu for their image in `nist-sp-800-53-r5-nodes-have-consistent-time`.
- Use of storage classes is limited to an allowed list, which can be customized to add additional classes with default encryption in `nist-sp-800-53-r5-restrict-storageclass`.
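You can layer these customizations over the bundle with a kustomize overlay instead of forking it. The following `kustomization.yaml` is a sketch: the constraint kind and name come from this bundle, but the added repo prefix (`registry.example.com/approved/`) is purely illustrative and must match your implementation.

```yaml
# kustomization.yaml -- sketch of customizing a bundle constraint in place.
# The patched value below is an illustrative placeholder, not a real registry.
resources:
- https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nist-sp-800-53-r5
patches:
- target:
    kind: K8sAllowedRepos
    name: nist-sp-800-53-r5-restrict-repos
  patch: |-
    - op: add
      path: /spec/parameters/repos/-
      value: registry.example.com/approved/
```

Apply the overlay with `kubectl apply -k .` from the directory containing this file.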
Audit NIST SP 800-53 Rev. 5 policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads and their compliance with the NIST policies outlined in the preceding table, you can deploy these constraints in "audit" mode to reveal violations and, more importantly, give yourself a chance to fix them before enforcing them on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
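Each constraint in the bundle is a Gatekeeper resource carrying the bundle label that the `kubectl` commands on this page filter on. Abbreviated, one looks roughly like the following sketch (the `match` stanza shown here is illustrative, not copied from the bundle):

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: nist-sp-800-53-r5-privileged-containers
  labels:
    bundleName: nist-sp-800-53-r5   # label used to select all constraints in this bundle
spec:
  enforcementAction: dryrun          # audit only: violations are recorded, not blocked
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
```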
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nist-sp-800-53-r5
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nist-sp-800-53-r5
The output is similar to the following:
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/nist-sp-800-53-r5-asm-peer-authn-strict-mtls created
k8sallowedrepos.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-repos created
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/nist-sp-800-53-r5-block-creation-with-default-serviceaccount created
k8sblockobjectsoftype.constraints.gatekeeper.sh/nist-sp-800-53-r5-block-secrets-of-type-basic-auth created
k8spspapparmor.constraints.gatekeeper.sh/nist-sp-800-53-r5-apparmor created
k8spspcapabilities.constraints.gatekeeper.sh/nist-sp-800-53-r5-capabilities created
k8spspforbiddensysctls.constraints.gatekeeper.sh/nist-sp-800-53-r5-sysctls created
k8spsphostfilesystem.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-hostpath-volumes created
k8spsphostnamespace.constraints.gatekeeper.sh/nist-sp-800-53-r5-host-namespaces created
k8spsphostnetworkingports.constraints.gatekeeper.sh/nist-sp-800-53-r5-host-network created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nist-sp-800-53-r5-privileged-containers created
k8spspprocmount.constraints.gatekeeper.sh/nist-sp-800-53-r5-proc-mount-type created
k8spspselinuxv2.constraints.gatekeeper.sh/nist-sp-800-53-r5-selinux created
k8spspseccomp.constraints.gatekeeper.sh/nist-sp-800-53-r5-seccomp created
k8spspvolumetypes.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-volume-types created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-role-wildcards created
k8srequirecosnodeimage.constraints.gatekeeper.sh/nist-sp-800-53-r5-nodes-have-consistent-time created
k8srequiredaemonsets.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-av-daemonset created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-namespace-network-policies created
k8srequiredlabels.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-managed-by-label created
k8srequiredresources.constraints.gatekeeper.sh/nist-sp-800-53-r5-cpu-and-memory-limits-required created
k8srestrictrbacsubjects.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-rbac-subjects created
k8srestrictrolebindings.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-clusteradmin-rolebindings created
k8sstorageclass.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-storageclass created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get constraints -l bundleName=nist-sp-800-53-r5
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/nist-sp-800-53-r5-apparmor dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrbacsubjects.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-rbac-subjects dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspforbiddensysctls.constraints.gatekeeper.sh/nist-sp-800-53-r5-sysctls dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspvolumetypes.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-volume-types dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/nist-sp-800-53-r5-host-network dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/nist-sp-800-53-r5-host-namespaces dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredaemonsets.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-av-daemonset dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-role-wildcards dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/nist-sp-800-53-r5-block-creation-with-default-serviceaccount dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-hostpath-volumes dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/nist-sp-800-53-r5-seccomp dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/nist-sp-800-53-r5-asm-peer-authn-strict-mtls dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredresources.constraints.gatekeeper.sh/nist-sp-800-53-r5-cpu-and-memory-limits-required dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprocmount.constraints.gatekeeper.sh/nist-sp-800-53-r5-proc-mount-type dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sblockobjectsoftype.constraints.gatekeeper.sh/nist-sp-800-53-r5-block-secrets-of-type-basic-auth dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-namespace-network-policies dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/nist-sp-800-53-r5-capabilities dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sstorageclass.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-storageclass dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredlabels.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-managed-by-label dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nist-sp-800-53-r5-privileged-containers dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sallowedrepos.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-repos dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/nist-sp-800-53-r5-selinux dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-clusteradmin-rolebindings dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequirecosnodeimage.constraints.gatekeeper.sh/nist-sp-800-53-r5-nodes-have-consistent-time dryrun 0
View policy violations
Once the policy constraints are installed in audit mode, violations on the cluster can be viewed in the UI using the Policy Controller Dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l bundleName=nist-sp-800-53-r5 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, a listing of the violation messages per constraint can be viewed with:
kubectl get constraint -l bundleName=nist-sp-800-53-r5 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
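The same `jq` filters also work against a saved dump, which is handy for reviewing violations offline. This sketch feeds the filters a small hand-written sample; the JSON below is illustrative, not real cluster output:

```shell
# Sketch: apply the page's jq filters to a saved dump instead of the live
# cluster. The sample JSON is illustrative, not real Policy Controller output.
cat > constraints.json <<'EOF'
{"items":[
  {"metadata":{"name":"nist-sp-800-53-r5-restrict-repos"},
   "status":{"totalViolations":2,
             "violations":[{"message":"invalid image repo <wordpress>"},
                           {"message":"invalid image repo <nginx>"}]}},
  {"metadata":{"name":"nist-sp-800-53-r5-seccomp"},
   "status":{"totalViolations":0}}
]}
EOF
# Per-constraint totals, one compact line each
jq -c '.items[]| [.metadata.name,.status.totalViolations]' constraints.json
# Only constraints with violations, including their messages
jq -c '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]' constraints.json
```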
Change NIST SP 800-53 Rev. 5 policy bundle enforcement action
Once you’ve reviewed policy violations on your cluster, you can consider changing the enforcement mode so that the admission controller either warns on or denies non-compliant resources when they are applied to the cluster.
[!WARNING] The deny enforcement action should be used with care, as it can block required changes and interrupt critical workloads or the cluster. It is strongly recommended to use only the `warn` or `dryrun` enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to `warn`:
kubectl get constraints -l bundleName=nist-sp-800-53-r5 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraints -l bundleName=nist-sp-800-53-r5
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wp-non-compliant
spec:
  containers:
  - image: wordpress
    name: wordpress
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [nist-sp-800-53-r5-cpu-and-memory-limits-required] container <wordpress> does not have <{"cpu", "memory"}> limits defined
Warning: [nist-sp-800-53-r5-restrict-repos] container <wordpress> has an invalid image repo <wordpress>, allowed repos are ["gcr.io/gke-release/", "gcr.io/anthos-baremetal-release/", "gcr.io/config-management-release/", "gcr.io/kubebuilder/", "gcr.io/gkeconnect/", "gke.gcr.io/"]
pod/wp-non-compliant created
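To clear both warnings, the Pod needs CPU and memory limits and an image pulled from one of the allowed repos. The following is a sketch; the image path is a placeholder under an allowed prefix, not a real image, and the limit values are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wp-compliant
spec:
  containers:
  - image: gcr.io/gke-release/example-app:1.0   # placeholder; must be under an allowed repo prefix
    name: example-app
    resources:
      limits:                                   # satisfies cpu-and-memory-limits-required
        cpu: 250m
        memory: 256Mi
```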
Remove NIST SP 800-53 Rev. 5 policy bundle
If needed, the NIST SP 800-53 Rev. 5 policy bundle can be removed from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=nist-sp-800-53-r5
Use NSA CISA Kubernetes Hardening policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the National Security Agency (NSA) Cybersecurity and Infrastructure Security Agency (CISA) Kubernetes Hardening Guide v1.2 Policy bundle to evaluate the compliance of your cluster resources against some aspects of the NSA CISA Kubernetes Hardening Guide v1.2 .
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce.
[!Note] This bundle has not been certified by NSA and CISA.
NSA CISA Kubernetes Hardening policy bundle constraints
| Constraint Name | Constraint Description | Control ID |
|---|---|---|
| nsa-cisa-k8s-v1.2-apparmor | Restricts AppArmor profile for Pods. | CM-3 Configuration Change Control |
| nsa-cisa-k8s-v1.2-automount-serviceaccount-token-pod | Restricts Pods from using automountServiceAccountToken. | |
| nsa-cisa-k8s-v1.2-block-all-ingress | Restricts the creation of Ingress objects. | |
| nsa-cisa-k8s-v1.2-block-secrets-of-type-basic-auth | Restricts the use of kubernetes.io/basic-auth type secrets. | |
| nsa-cisa-k8s-v1.2-capabilities | Containers must drop all capabilities, and are not permitted to add back any capabilities. | |
| nsa-cisa-k8s-v1.2-cpu-and-memory-limits-required | All workload pods must specify cpu and memory limits. | |
| nsa-cisa-k8s-v1.2-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | |
| nsa-cisa-k8s-v1.2-host-namespaces-hostnetwork | Sharing the host namespaces must be disallowed. | |
| nsa-cisa-k8s-v1.2-host-network | Restricts containers from running with the hostNetwork flag set to true. | |
| nsa-cisa-k8s-v1.2-hostport | Restricts containers from running with hostPort configured. | |
| nsa-cisa-k8s-v1.2-privilege-escalation | Restricts containers with allowPrivilegeEscalation set to true. | |
| nsa-cisa-k8s-v1.2-privileged-containers | Restricts containers with securityContext.privileged set to true. | |
| nsa-cisa-k8s-v1.2-readonlyrootfilesystem | Requires the use of a read-only root file system by pod containers. | |
| nsa-cisa-k8s-v1.2-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | |
| nsa-cisa-k8s-v1.2-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | CM-7 Least Functionality |
| nsa-cisa-k8s-v1.2-restrict-edit-rolebindings | Restricts the use of the edit role. | |
| nsa-cisa-k8s-v1.2-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nsa-cisa-k8s-v1.2-restrict-pods-exec | Restricts the use of pods/exec in Roles and ClusterRoles. | |
| nsa-cisa-k8s-v1.2-running-as-non-root | Restricts containers from running as the root user. | |
| nsa-cisa-k8s-v1.2-seccomp | Seccomp profile must not be explicitly set to Unconfined. | |
| nsa-cisa-k8s-v1.2-selinux | Cannot set the SELinux type or set a custom SELinux user or role option. | |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Audit NSA CISA Kubernetes Hardening v1.2 policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads and their compliance with the NSA CISA Kubernetes Hardening Guide v1.2 policies outlined in the preceding table, you can deploy these constraints in "audit" mode to reveal violations and, more importantly, give yourself a chance to fix them before optionally enforcing them on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nsa-cisa-k8s-v1.2
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nsa-cisa-k8s-v1.2
The output is similar to the following:
k8sblockallingress.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-block-all-ingress created
k8sblockobjectsoftype.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-block-secrets-of-type-basic-auth created
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-running-as-non-root created
k8spspapparmor.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-apparmor created
k8spspautomountserviceaccounttokenpod.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-automount-serviceaccount-token-pod created
k8spspcapabilities.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-capabilities created
k8spsphostfilesystem.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-hostpath-volumes created
k8spsphostnamespace.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-host-namespaces created
k8spsphostnetworkingports.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-host-network created
k8spsphostnetworkingports.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-hostport created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-privileged-containers created
k8spspreadonlyrootfilesystem.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-readonlyrootfilesystem created
k8spspselinuxv2.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-selinux created
k8spspseccomp.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-seccomp created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-require-namespace-network-policies created
k8srequiredresources.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-cpu-and-memory-limits-required created
k8srestrictrolebindings.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-clusteradmin-rolebindings created
k8srestrictrolebindings.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-edit-rolebindings created
k8srestrictrolerules.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-pods-exec created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get constraints -l bundleName=nsa-cisa-k8s-v1.2
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sblockallingress.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-block-all-ingress dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sblockobjectsoftype.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-block-secrets-of-type-basic-auth dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-running-as-non-root dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-privilege-escalation dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-apparmor dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspautomountserviceaccounttokenpod.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-automount-serviceaccount-token-pod dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-capabilities dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-hostpath-volumes dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-host-namespaces dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-host-network dryrun 0
k8spsphostnetworkingports.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-hostport dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-privileged-containers dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspreadonlyrootfilesystem.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-readonlyrootfilesystem dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-seccomp dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-selinux dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredresources.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-cpu-and-memory-limits-required dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-require-namespace-network-policies dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-clusteradmin-rolebindings dryrun 0
k8srestrictrolebindings.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-edit-rolebindings dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrolerules.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-pods-exec dryrun 0
View policy violations
Once the policy constraints are installed in audit mode, violations on the cluster can be viewed in the UI using the Policy Controller Dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l bundleName=nsa-cisa-k8s-v1.2 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, a listing of the violation messages per constraint can be viewed with:
kubectl get constraint -l bundleName=nsa-cisa-k8s-v1.2 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
Change NSA CISA Kubernetes Hardening v1.2 policy bundle enforcement action
Once you’ve reviewed policy violations on your cluster, you can consider changing the enforcement mode so that the admission controller either warns on or denies non-compliant resources when they are applied to the cluster.
[!WARNING] The deny enforcement action should be used with care, as it can block required changes and interrupt critical workloads or the cluster. It is strongly recommended to use only the `warn` or `dryrun` enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to `warn`:
kubectl get constraints -l bundleName=nsa-cisa-k8s-v1.2 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraints -l bundleName=nsa-cisa-k8s-v1.2
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
  - image: wordpress
    name: wordpress
    ports:
    - containerPort: 80
      name: wordpress
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [nsa-cisa-k8s-v1.2-automount-serviceaccount-token-pod] Automounting service account token is disallowed, pod: wp-non-compliant
Warning: [nsa-cisa-k8s-v1.2-running-as-non-root] Container wordpress is attempting to run without a required securityContext/runAsGroup. Allowed runAsGroup: {"ranges": [{"max": 65536, "min": 1000}], "rule": "MustRunAs"}
Warning: [nsa-cisa-k8s-v1.2-running-as-non-root] Container wordpress is attempting to run without a required securityContext/runAsUser
Warning: [nsa-cisa-k8s-v1.2-privilege-escalation] Privilege escalation container is not allowed: wordpress
Warning: [nsa-cisa-k8s-v1.2-cpu-and-memory-limits-required] container <wordpress> does not have <{"cpu", "memory"}> limits defined
Warning: [nsa-cisa-k8s-v1.2-capabilities] container <wordpress> is not dropping all required capabilities. Container must drop all of ["ALL"] or "ALL"
Warning: [nsa-cisa-k8s-v1.2-readonlyrootfilesystem] only read-only root filesystem container is allowed: wordpress
pod/wp-non-compliant created
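A Pod that addresses each warning above would look roughly like the following sketch. The numeric IDs and limits are illustrative, and the stock wordpress image may still fail at runtime under these settings; the point is the admission-time shape:

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-compliant
spec:
  automountServiceAccountToken: false    # nsa-cisa-k8s-v1.2-automount-serviceaccount-token-pod
  containers:
  - image: wordpress
    name: wordpress
    resources:
      limits:                            # nsa-cisa-k8s-v1.2-cpu-and-memory-limits-required
        cpu: 500m
        memory: 512Mi
    securityContext:
      runAsUser: 1000                    # nsa-cisa-k8s-v1.2-running-as-non-root
      runAsGroup: 1000
      allowPrivilegeEscalation: false    # nsa-cisa-k8s-v1.2-privilege-escalation
      readOnlyRootFilesystem: true       # nsa-cisa-k8s-v1.2-readonlyrootfilesystem
      capabilities:
        drop: ["ALL"]                    # nsa-cisa-k8s-v1.2-capabilities
```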
Remove NSA CISA Kubernetes Hardening v1.2 policy bundle
If needed, the NSA CISA Kubernetes Hardening v1.2 policy bundle can be removed from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=nsa-cisa-k8s-v1.2
Use PCI-DSS v4.0 policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the PCI-DSS v4.0 bundle to evaluate the compliance of your cluster resources against some aspects of the Payment Card Industry Data Security Standard (PCI-DSS) v4.0 .
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce.
Note: This bundle has not been certified by PCI.
PCI-DSS v4.0 policy bundle constraints
| Constraint Name | Constraint Description | Control IDs |
|---|---|---|
| pci-dss-v4.0-require-apps-annotations | Requires that all apps in the cluster have a network-controls/date annotation. | 2.2.5 |
| pci-dss-v4.0-require-av-daemonset | Requires the presence of an Anti-Virus DaemonSet. | 5.2.1, 5.2.2, 5.2.3, 5.3.1, 5.3.2, 5.3.5 |
| pci-dss-v4.0-require-default-deny-network-policies | Requires that every namespace defined in the cluster have a default deny NetworkPolicy for egress. | 1.3.2, 1.4.4 |
| pci-dss-v4.0-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | 1.2.8, 2.2.6, 5.3.5, 6.3.2, 6.5.1 |
| pci-dss-v4.0-require-namespace-network-policies | Requires that every Namespace defined in the cluster has a NetworkPolicy. | 1.2.5, 1.2.6, 1.4.1, 1.4.4 |
| pci-dss-v4.0-require-peer-authentication-strict-mtls | Ensures PeerAuthentications cannot overwrite strict mTLS. | 2.2.7, 4.2.1, 8.3.2 |
| pci-dss-v4.0-require-valid-network-ranges | Restricts CIDR ranges permitted for use with ingress and egress. | 1.3.1, 1.3.2, 1.4.2, 1.4.4 |
| pci-dss-v4.0-resources-have-required-labels | Requires all apps to contain a specified label to meet firewall requirements. | 1.2.7 |
| pci-dss-v4.0-restrict-cluster-admin-role | Restricts the use of the cluster-admin role. | 7.2.1, 7.2.2, 7.2.5, 8.2.4 |
| pci-dss-v4.0-restrict-creation-with-default-serviceaccount | Restricts the creation of resources using a default service account. Has no effect during audit. | 2.2.2 |
| pci-dss-v4.0-restrict-default-namespace | Restricts pods from using the default namespace. | 2.2.3 |
| pci-dss-v4.0-restrict-ingress | Restricts the creation of Ingress objects. | 1.3.1, 1.4.2, 1.4.4 |
| pci-dss-v4.0-restrict-node-image | Ensures consistent and correct time on Nodes by allowing only Container-Optimized OS or Ubuntu as the OS image. | 10.6.1, 10.6.2, 10.6.3 |
| pci-dss-v4.0-restrict-pods-exec | Restricts the use of pods/exec in Roles and ClusterRoles. | 8.6.1 |
| pci-dss-v4.0-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | 7.3.2, 8.2.1, 8.2.2, 8.2.4 |
| pci-dss-v4.0-restrict-role-wildcards | Restricts the use of wildcards in Roles and ClusterRoles. | 7.3.3, 8.2.4 |
| pci-dss-v4.0-restrict-storageclass | Restricts StorageClass usage to a list of StorageClasses that encrypt by default. | 3.3.2, 3.3.3 |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Configure your cluster’s workload for PCI-DSS v4.0
- All apps (`ReplicaSet`, `Deployment`, `StatefulSet`, and `DaemonSet`) must include a `network-controls/date` annotation with the schema `YYYY-MM-DD`.
- An antivirus solution is required. The default is the presence of a `DaemonSet` named `clamav` in the `clamav` namespace; however, the `DaemonSet`'s name and namespace can be customized for your implementation in the `pci-dss-v4.0-require-av-daemonset` constraint.
- Every `Namespace` defined in the cluster must have a default deny `NetworkPolicy` for egress. Permitted exceptions can be specified in `pci-dss-v4.0-require-namespace-network-policies`.
- Every `Namespace` defined in the cluster must have a `NetworkPolicy`.
- If using Cloud Service Mesh, ASM `PeerAuthentication` must use strict mTLS (`spec.mtls.mode: STRICT`).
- Only permitted IP ranges can be used for ingress and egress. These can be specified in `pci-dss-v4.0-require-valid-network-ranges`.
- All apps (`ReplicaSet`, `Deployment`, `StatefulSet`, and `DaemonSet`) must include a `pci-dss-firewall-audit` label with the schema `pci-dss-[0-9]{4}q[1-4]`.
- The use of the cluster-admin `ClusterRole` is not permitted.
- Resources cannot be created using the default service account.
- The default `Namespace` cannot be used for pods.
- Only permitted Ingress objects (`Ingress`, `Gateway`, and `Service` types of `NodePort` and `LoadBalancer`) can be created. These can be specified in `pci-dss-v4.0-restrict-ingress`.
- All nodes must use Container-Optimized OS or Ubuntu for their image, for consistent time.
- The use of the wildcard character or the `pods/exec` permission in `Roles` and `ClusterRoles` is not permitted.
- Only permitted subjects can be used in RBAC bindings. Your domain name(s) can be specified in `pci-dss-v4.0-restrict-rbac-subjects`.
- The use of an encrypt-by-default `StorageClass` is required; permitted classes can be specified in `pci-dss-v4.0-restrict-storageclass`.
Audit PCI-DSS v4.0 policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads and their compliance with the PCI-DSS v4.0 policies outlined in the preceding table, you can deploy these constraints in "audit" mode to reveal violations and, more importantly, give yourself a chance to fix them before enforcing them on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pci-dss-v4.0
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pci-dss-v4.0
The output is similar to the following:
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/pci-dss-v4.0-require-peer-authentication-strict-mtls created
k8sblockallingress.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-ingress created
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-creation-with-default-serviceaccount created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-role-wildcards created
k8srequirecosnodeimage.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-node-image created
k8srequiredaemonsets.constraints.gatekeeper.sh/pci-dss-v4.0-require-av-daemonset created
k8srequiredefaultdenyegresspolicy.constraints.gatekeeper.sh/pci-dss-v4.0-require-default-deny-network-policies created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/pci-dss-v4.0-require-namespace-network-policies created
k8srequirevalidrangesfornetworks.constraints.gatekeeper.sh/pci-dss-v4.0-require-valid-network-ranges created
k8srequiredannotations.constraints.gatekeeper.sh/pci-dss-v4.0-require-apps-annotations created
k8srequiredlabels.constraints.gatekeeper.sh/pci-dss-v4.0-require-managed-by-label created
k8srequiredlabels.constraints.gatekeeper.sh/pci-dss-v4.0-resources-have-required-labels created
k8srestrictnamespaces.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-default-namespace created
k8srestrictrbacsubjects.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-rbac-subjects created
k8srestrictrolebindings.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-cluster-admin-role created
k8srestrictrolerules.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-pods-exec created
k8sstorageclass.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-storageclass created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get constraints -l bundleName=pci-dss-v4.0
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/pci-dss-v4.0-require-peer-authentication-strict-mtls dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sblockallingress.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-ingress dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-creation-with-default-serviceaccount dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-role-wildcards dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequirecosnodeimage.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-node-image dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredaemonsets.constraints.gatekeeper.sh/pci-dss-v4.0-require-av-daemonset dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredannotations.constraints.gatekeeper.sh/pci-dss-v4.0-require-apps-annotations dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredefaultdenyegresspolicy.constraints.gatekeeper.sh/pci-dss-v4.0-require-default-deny-network-policies dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequiredlabels.constraints.gatekeeper.sh/pci-dss-v4.0-require-managed-by-label dryrun 0
k8srequiredlabels.constraints.gatekeeper.sh/pci-dss-v4.0-resources-have-required-labels dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/pci-dss-v4.0-require-namespace-network-policies dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srequirevalidrangesfornetworks.constraints.gatekeeper.sh/pci-dss-v4.0-require-valid-network-ranges dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictnamespaces.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-default-namespace dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrbacsubjects.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-rbac-subjects dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-cluster-admin-role dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrolerules.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-pods-exec dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sstorageclass.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-storageclass dryrun 0
View policy violations
Once the policy constraints are installed in audit mode, violations on the cluster can be viewed in the UI using the Policy Controller Dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l bundleName=pci-dss-v4.0 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, a listing of the violation messages per constraint can be viewed with:
kubectl get constraint -l bundleName=pci-dss-v4.0 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
Change PCI-DSS v4.0 policy bundle enforcement action
Once you’ve reviewed policy violations on your cluster, you can consider changing the enforcement mode so that the admission controller either warns on or even denies (blocks) non-compliant resources from being applied to the cluster.
[!WARNING] The deny enforcement action should be used with care because it can block required changes, resulting in interruption to critical workloads or the cluster. It is strongly recommended to use only the `warn` or `dryrun` enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to `warn`:
kubectl get constraint -l bundleName=pci-dss-v4.0 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=pci-dss-v4.0
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
namespace: default
name: wp-non-compliant
labels:
app: wordpress
spec:
containers:
- image: wordpress
name: wordpress
ports:
- containerPort: 80
name: wordpress
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [pci-dss-v4.0-restrict-default-namespace] <default> namespace is restricted
pod/wp-non-compliant created
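For contrast, the pod below avoids the violation shown above simply by running in a non-default namespace (here `web`, a placeholder name); other constraints in the bundle may still flag it depending on your cluster's configuration:

```yaml
# Hypothetical compliant counterpart to wp-non-compliant: it runs in a
# dedicated namespace instead of "default", so it passes the
# pci-dss-v4.0-restrict-default-namespace check.
apiVersion: v1
kind: Pod
metadata:
  namespace: web            # any non-default namespace
  name: wp-compliant
  labels:
    app: wordpress
spec:
  containers:
  - image: wordpress
    name: wordpress
    ports:
    - containerPort: 80
      name: wordpress
```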
Remove PCI-DSS v4.0 policy bundle
If needed, the PCI-DSS v4.0 policy bundle can be removed from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=pci-dss-v4.0
Use Pod Security Policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the Pod Security Policy bundle to achieve many of the same protections as Kubernetes Pod Security Policy (PSP) , with the added ability to test your policies before enforcing them and exclude coverage of specific resources.
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation that audits or enforces those requirements.
The bundle includes these constraints, which provide parameters that map to the following Kubernetes Pod Security Policy (PSP) field names (control IDs):
| Constraint Name | Control ID | Type |
|---|---|---|
| psp-v2022-flexvolume-drivers | Allow specific FlexVolume drivers | allowedFlexVolumes |
| psp-v2022-psp-allow-privilege-escalation | Restricting escalation to root privileges | allowPrivilegeEscalation |
| psp-v2022-psp-apparmor | The AppArmor profile used by containers | annotations |
| psp-v2022-psp-capabilities | Linux capabilities | allowedCapabilities, requiredDropCapabilities |
| psp-v2022-psp-forbidden-sysctls | The sysctl profile used by containers | forbiddenSysctls |
| psp-v2022-psp-fsgroup | Allocating an FSGroup that owns the pod’s volumes | fsGroup |
| psp-v2022-psp-host-filesystem | Usage of the host filesystem | allowedHostPaths |
| psp-v2022-psp-host-namespace | Usage of host namespaces | hostPID, hostIPC |
| psp-v2022-psp-host-network-ports | Usage of host networking and ports | hostNetwork, hostPorts |
| psp-v2022-psp-pods-allowed-user-ranges | The user and group IDs of the container | runAsUser, runAsGroup, supplementalGroups, fsGroup |
| psp-v2022-psp-privileged-container | Running of privileged containers | privileged |
| psp-v2022-psp-proc-mount | The Allowed Proc Mount types for the container | allowedProcMountTypes |
| psp-v2022-psp-readonlyrootfilesystem | Requiring the use of a read-only root file system | readOnlyRootFilesystem |
| psp-v2022-psp-seccomp | The seccomp profile used by containers | annotations |
| psp-v2022-psp-selinux-v2 | The SELinux context of the container | seLinux |
| psp-v2022-psp-volume-types | Usage of volume types | volumes |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Audit Pod Security Policy policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads and their compliance with the KOSMOS recommended best practices outlined in the preceding table, you can deploy these constraints in "audit" mode to reveal violations and, more importantly, give yourself a chance to fix them before enforcing them on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/psp-v2022
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/psp-v2022
The output is similar to the following:
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/psp-v2022-psp-allow-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/psp-v2022-psp-pods-allowed-user-ranges created
k8spspapparmor.constraints.gatekeeper.sh/psp-v2022-psp-apparmor created
k8spspcapabilities.constraints.gatekeeper.sh/psp-v2022-psp-capabilities created
k8spspfsgroup.constraints.gatekeeper.sh/psp-v2022-psp-fsgroup created
k8spspflexvolumes.constraints.gatekeeper.sh/psp-v2022-psp-flexvolume-drivers created
k8spspforbiddensysctls.constraints.gatekeeper.sh/psp-v2022-psp-forbidden-sysctls created
k8spsphostfilesystem.constraints.gatekeeper.sh/psp-v2022-psp-host-filesystem created
k8spsphostnamespace.constraints.gatekeeper.sh/psp-v2022-psp-host-namespace created
k8spsphostnetworkingports.constraints.gatekeeper.sh/psp-v2022-psp-host-network-ports created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-v2022-psp-privileged-container created
k8spspprocmount.constraints.gatekeeper.sh/psp-v2022-psp-proc-mount created
k8spspreadonlyrootfilesystem.constraints.gatekeeper.sh/psp-v2022-psp-readonlyrootfilesystem created
k8spspselinuxv2.constraints.gatekeeper.sh/psp-v2022-psp-selinux-v2 created
k8spspseccomp.constraints.gatekeeper.sh/psp-v2022-psp-seccomp created
k8spspvolumetypes.constraints.gatekeeper.sh/psp-v2022-psp-volume-types created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/psp-v2022
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/psp-v2022-psp-allow-privilege-escalation dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/psp-v2022-psp-pods-allowed-user-ranges dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/psp-v2022-psp-apparmor dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/psp-v2022-psp-capabilities dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspfsgroup.constraints.gatekeeper.sh/psp-v2022-psp-fsgroup dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspflexvolumes.constraints.gatekeeper.sh/psp-v2022-psp-flexvolume-drivers dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspforbiddensysctls.constraints.gatekeeper.sh/psp-v2022-psp-forbidden-sysctls dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/psp-v2022-psp-host-filesystem dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/psp-v2022-psp-host-namespace dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/psp-v2022-psp-host-network-ports dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-v2022-psp-privileged-container dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprocmount.constraints.gatekeeper.sh/psp-v2022-psp-proc-mount dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspreadonlyrootfilesystem.constraints.gatekeeper.sh/psp-v2022-psp-readonlyrootfilesystem dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/psp-v2022-psp-selinux-v2 dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/psp-v2022-psp-seccomp dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspvolumetypes.constraints.gatekeeper.sh/psp-v2022-psp-volume-types dryrun 0
- (Optional) Adjust the PSP field name `parameters` in the constraint files as required for your cluster environment. For more details, check the link for the specific PSP field name in the preceding table. For example, in `psp-v2022-psp-host-network-ports`:
parameters:
hostNetwork: true
min: 80
max: 9000
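In context, those parameters live on the constraint resource itself. The following is a sketch of what the full customized object might look like, assuming the `K8sPSPHostNetworkingPorts` constraint kind that the applied template registers (adjust the port range and match rules to your environment):

```yaml
# Hypothetical customized copy of the psp-v2022-psp-host-network-ports
# constraint: allows host networking, but only for ports 80-9000, and
# keeps the bundle's dryrun enforcement action.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPHostNetworkingPorts
metadata:
  name: psp-v2022-psp-host-network-ports
spec:
  enforcementAction: dryrun
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    hostNetwork: true
    min: 80
    max: 9000
```

Re-applying the edited constraint with `kubectl apply -f` replaces the bundle's default parameters with yours.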
View policy violations
Once the policy constraints are installed in audit mode, violations on the cluster can be viewed in the UI using the Policy Controller Dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l policycontroller.gke.io/bundleName=psp-v2022 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, a listing of the violation messages per constraint can be viewed with:
kubectl get constraint -l policycontroller.gke.io/bundleName=psp-v2022 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
Change Pod Security Policy policy bundle enforcement action
Once you’ve reviewed policy violations on your cluster, you can consider changing the enforcement mode so that the admission controller either warns on or even denies (blocks) non-compliant resources from being applied to the cluster.
[!WARNING] The deny enforcement action should be used with care because it can block required changes, resulting in interruption to critical workloads or the cluster. It is strongly recommended to use only the `warn` or `dryrun` enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to `warn`:
kubectl get constraint -l bundleName=psp-v2022 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=psp-v2022
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
namespace: default
name: wp-non-compliant
labels:
app: wordpress
spec:
containers:
- image: wordpress
name: wordpress
ports:
- containerPort: 80
name: wordpress
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [psp-v2022-psp-pods-allowed-user-ranges] Container wordpress is attempting to run without a required securityContext/fsGroup. Allowed fsGroup: {"ranges": [{"max": 200, "min": 100}], "rule": "MustRunAs"}
Warning: [psp-v2022-psp-pods-allowed-user-ranges] Container wordpress is attempting to run without a required securityContext/runAsGroup. Allowed runAsGroup: {"ranges": [{"max": 200, "min": 100}], "rule": "MustRunAs"}
Warning: [psp-v2022-psp-pods-allowed-user-ranges] Container wordpress is attempting to run without a required securityContext/runAsUser
Warning: [psp-v2022-psp-pods-allowed-user-ranges] Container wordpress is attempting to run without a required securityContext/supplementalGroups. Allowed supplementalGroups: {"ranges": [{"max": 200, "min": 100}], "rule": "MustRunAs"}
Warning: [psp-v2022-psp-allow-privilege-escalation] Privilege escalation container is not allowed: wordpress
Warning: [psp-v2022-psp-seccomp] Seccomp profile 'not configured' is not allowed for container 'wordpress'. Found at: no explicit profile found. Allowed profiles: {"RuntimeDefault", "docker/default", "runtime/default"}
Warning: [psp-v2022-psp-capabilities] container <wordpress> is not dropping all required capabilities. Container must drop all of ["must_drop"] or "ALL"
Warning: [psp-v2022-psp-readonlyrootfilesystem] only read-only root filesystem container is allowed: wordpress
pod/wp-non-compliant created
Remove Pod Security Policy policy bundle
If needed, the Pod Security Policy policy bundle can be removed from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=psp-v2022
Use Pod Security Standards Baseline policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the Pod Security Standards Baseline bundle to achieve many of the same protections as Kubernetes Pod Security Standards (PSS) Baseline policy , with the added ability to test your policies before enforcing them and exclude coverage of specific resources.
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation that audits or enforces those requirements.
Pod Security Standards Baseline policy bundle constraints
| Constraint Name | Control Description | Type |
|---|---|---|
| pss-baseline-v2022-apparmor | The AppArmor profile used by containers | AppArmor |
| pss-baseline-v2022-capabilities | Linux capabilities | Capabilities |
| pss-baseline-v2022-host-namespaces-host-pid-ipc | Usage of host namespaces | Host Namespaces |
| pss-baseline-v2022-host-namespaces-hostnetwork | Use of host networking | |
| pss-baseline-v2022-host-ports | Usage of host ports | Host Ports (configurable) |
| pss-baseline-v2022-hostpath-volumes | Usage of the host filesystem | HostPath Volumes |
| pss-baseline-v2022-hostprocess | Usage of Windows HostProcess | HostProcess |
| pss-baseline-v2022-privileged-containers | Running of privileged containers | Privileged Containers |
| pss-baseline-v2022-proc-mount-type | The Allowed Proc Mount types for the container | /proc Mount Type |
| pss-baseline-v2022-seccomp | The seccomp profile used by containers | Seccomp |
| pss-baseline-v2022-selinux | The SELinux context of the container | SELinux |
| pss-baseline-v2022-sysctls | The sysctl profile used by containers | Sysctls |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Audit Pod Security Standards Baseline policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads and their compliance with the KOSMOS recommended best practices outlined in the preceding table, you can deploy these constraints in "audit" mode to reveal violations and, more importantly, give yourself a chance to fix them before enforcing them on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/pss-baseline-v2022
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/pss-baseline-v2022
The output is similar to the following:
k8spspapparmor.constraints.gatekeeper.sh/pss-baseline-v2022-apparmor created
k8spspcapabilities.constraints.gatekeeper.sh/pss-baseline-v2022-capabilities created
k8spsphostfilesystem.constraints.gatekeeper.sh/pss-baseline-v2022-hostpath-volumes created
k8spsphostnamespace.constraints.gatekeeper.sh/pss-baseline-v2022-host-namespaces-host-pid-ipc created
k8spsphostnetworkingports.constraints.gatekeeper.sh/pss-baseline-v2022-host-namespaces-hostnetwork created
k8spsphostnetworkingports.constraints.gatekeeper.sh/pss-baseline-v2022-host-ports created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/pss-baseline-v2022-privileged-containers created
k8spspprocmount.constraints.gatekeeper.sh/pss-baseline-v2022-proc-mount-type created
k8spspselinuxv2.constraints.gatekeeper.sh/pss-baseline-v2022-selinux created
k8spspseccomp.constraints.gatekeeper.sh/pss-baseline-v2022-seccomp created
k8spspforbiddensysctls.constraints.gatekeeper.sh/pss-baseline-v2022-sysctls created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/pss-baseline-v2022
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/pss-baseline-v2022-apparmor dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/pss-baseline-v2022-capabilities dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/pss-baseline-v2022-hostpath-volumes dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/pss-baseline-v2022-host-namespaces-host-pid-ipc dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/pss-baseline-v2022-host-namespaces-hostnetwork dryrun 0
k8spsphostnetworkingports.constraints.gatekeeper.sh/pss-baseline-v2022-host-ports dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/pss-baseline-v2022-privileged-containers dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprocmount.constraints.gatekeeper.sh/pss-baseline-v2022-proc-mount-type dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/pss-baseline-v2022-selinux dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/pss-baseline-v2022-seccomp dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspforbiddensysctls.constraints.gatekeeper.sh/pss-baseline-v2022-sysctls dryrun 0
- (Optional) Adjust the `parameters` in the constraint files as required for your cluster environment. For more details, check the link for the specific control type in the preceding table. For example, in `pss-baseline-v2022-host-ports`:
parameters:
# A minimum restricted known list can be implemented here.
min: 0
max: 0
View policy violations
Once the policy constraints are installed in audit mode, violations on the cluster can be viewed in the UI using the Policy Controller Dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l bundleName=pss-baseline-v2022 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, a listing of the violation messages per constraint can be viewed with:
kubectl get constraint -l bundleName=pss-baseline-v2022 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
Change Pod Security Standards Baseline policy bundle enforcement action
Once you’ve reviewed policy violations on your cluster, you can consider changing the enforcement mode so that the admission controller either warns on or even denies (blocks) non-compliant resources from being applied to the cluster.
[!WARNING] The deny enforcement action should be used with care because it can block required changes, resulting in interruption to critical workloads or the cluster. It is strongly recommended to use only the `warn` or `dryrun` enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to `warn`:
kubectl get constraint -l bundleName=pss-baseline-v2022 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=pss-baseline-v2022
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
namespace: default
name: wp-non-compliant
labels:
app: wordpress
spec:
containers:
- image: wordpress
name: wordpress
ports:
- containerPort: 80
hostPort: 80
name: wordpress
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [pss-baseline-v2022-host-ports] The specified hostNetwork and hostPort are not allowed, pod: wp-non-compliant. Allowed values: {"max": 0, "min": 0}
pod/wp-non-compliant created
Remove Pod Security Standards Baseline policy bundle
If needed, the Pod Security Standards Baseline policy bundle can be removed from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=pss-baseline-v2022
Use Pod Security Standards Restricted policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the Pod Security Standards Restricted bundle to achieve many of the same protections as Kubernetes Pod Security Standards (PSS) Restricted policy , with the added ability to test your policies before enforcing them and exclude coverage of specific resources.
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation that audits or enforces those requirements.
The bundle includes these constraints, which map to the following Kubernetes Pod Security Standards (PSS) Restricted policy controls:
| Constraint Name | Control Description | Type |
|---|---|---|
| pss-restricted-v2022-capabilities | Linux capabilities | Capabilities |
| pss-restricted-v2022-privilege-escalation | Restricting escalation to root privileges | Privilege Escalation |
| pss-restricted-v2022-psp-volume-types | Usage of volume types | Volume Types |
| pss-restricted-v2022-running-as-non-root | The runAsNonRoot value of the container | Running as Non-root |
| pss-restricted-v2022-running-as-non-root-user | The user ID of the container | Running as Non-root user |
| pss-restricted-v2022-seccomp | The seccomp profile used by containers | Seccomp |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Audit Pod Security Standards Restricted policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads and their compliance with the KOSMOS recommended best practices outlined in the preceding table, you can deploy these constraints in "audit" mode to reveal violations and, more importantly, give yourself a chance to fix them before enforcing them on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
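For orientation, the following is a rough sketch of what one constraint in this bundle looks like once applied. The kind corresponds to the k8spspseccomp resource shown in the command output on this page, but the field values here are illustrative assumptions; preview the real manifests with the kubectl kustomize command in the next step.

```yaml
# Illustrative sketch only -- not the actual bundle manifest.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPSeccomp
metadata:
  name: pss-restricted-v2022-seccomp
  labels:
    bundleName: pss-restricted-v2022   # label selector used by the kubectl commands on this page
spec:
  enforcementAction: dryrun   # audit only: violations are recorded, not blocked
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
```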
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pss-restricted-v2022
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pss-restricted-v2022
The output is similar to the following:
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/pss-restricted-v2022-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/pss-restricted-v2022-running-as-non-root created
k8spspcapabilities.constraints.gatekeeper.sh/pss-restricted-v2022-capabilities created
k8spspseccomp.constraints.gatekeeper.sh/pss-restricted-v2022-seccomp created
k8spspvolumetypes.constraints.gatekeeper.sh/pss-restricted-v2022-psp-volume-types created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pss-restricted-v2022
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/pss-restricted-v2022-privilege-escalation dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/pss-restricted-v2022-running-as-non-root dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/pss-restricted-v2022-capabilities dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/pss-restricted-v2022-seccomp dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspvolumetypes.constraints.gatekeeper.sh/pss-restricted-v2022-psp-volume-types dryrun 0
View policy violations
Once the policy constraints are installed in audit mode, you can view violations on the cluster in the UI by using the Policy Controller dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l bundleName=pss-restricted-v2022 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, you can view the violation messages for each constraint with:
kubectl get constraint -l bundleName=pss-restricted-v2022 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
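If jq isn't available, a saved dump can be post-processed with standard tools instead. A minimal sketch, assuming you first save the constraint list with `kubectl get constraint -l bundleName=pss-restricted-v2022 -o json > /tmp/constraints.json`; the sample data below is hypothetical and trimmed to the two fields the jq filters above read:

```shell
# Hypothetical sample standing in for a saved `kubectl get constraint -o json` dump.
cat > /tmp/constraints.json <<'EOF'
{"items":[
 {"metadata":{"name":"pss-restricted-v2022-seccomp"},"status":{"totalViolations":2}},
 {"metadata":{"name":"pss-restricted-v2022-capabilities"},"status":{"totalViolations":0}}
]}
EOF

# Print "<name> <count>" for constraints that report at least one violation.
violating=$(grep -o '"name":"[^"]*"},"status":{"totalViolations":[0-9]*' /tmp/constraints.json \
  | sed 's/.*"name":"\([^"]*\)".*"totalViolations":\([0-9]*\)/\1 \2/' \
  | awk '$2 > 0')
echo "$violating"
```

This is a rough fallback; the jq commands above remain the more robust option because they parse the JSON structurally.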
Change Pod Security Standards Restricted policy bundle enforcement action
Once you’ve reviewed the policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on or denies (blocks) non-compliant resources being applied to the cluster.
[!WARNING] The deny enforcement action should be used with care because it can block required changes and interrupt critical workloads or the cluster. It is strongly recommended to use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=pss-restricted-v2022 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
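The same JSON-patch payload works for any enforcement action; for example, you might later revert the bundle to audit-only mode. A minimal sketch (the revert flow is an assumption, not a documented step of this procedure):

```shell
# Build the JSON-patch payload for a given enforcement action.
ACTION=dryrun   # or "warn" / "deny"
PATCH='[{"op":"replace","path":"/spec/enforcementAction","value":"'"$ACTION"'"}]'
echo "$PATCH"

# Against a live cluster you would then run (commented out here):
#   kubectl get constraint -l bundleName=pss-restricted-v2022 -o name \
#     | xargs -I {} kubectl patch {} --type='json' -p="$PATCH"
```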
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=pss-restricted-v2022
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
  - image: wordpress
    name: wordpress
    ports:
    - containerPort: 80
      hostPort: 80
      name: wordpress
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [pss-baseline-v2022-host-ports] The specified hostNetwork and hostPort are not allowed, pod: wp-non-compliant. Allowed values: {"max": 0, "min": 0}
pod/wp-non-compliant created
Remove Pod Security Standards Restricted policy bundle
If needed, you can remove the Pod Security Standards Restricted policy bundle from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=pss-restricted-v2022
Policy Controller bundles
This page describes what Policy Controller bundles are and provides an overview of the available policy bundles.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce.
About Policy Controller bundles
You can use Policy Controller to apply individual constraints to your cluster or write your own custom policies. You can also use policy bundles, which let you audit your clusters without writing any constraints. A policy bundle is a group of constraints that can help you apply best practices, meet industry standards, or address regulatory requirements across your cluster resources.
You can apply policy bundles to your existing clusters to check whether your workloads are compliant. When you apply a policy bundle, it audits your cluster by applying constraints with the dryrun enforcement type, which lets you see violations without blocking your workloads. It is recommended to use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
For example, the CIS Kubernetes Benchmark bundle can help audit your cluster resources against the CIS Kubernetes Benchmark, a set of recommendations for configuring Kubernetes resources to support a strong security posture.
Available Policy Controller bundles
The following table lists the available policy bundles. Select the name of the policy bundle to read documentation on how to apply the bundle, audit resources, and enforce policies.
| Name and description | Bundle alias | Type |
|---|---|---|
| CIS Kubernetes Benchmark: Audit compliance of your clusters against the CIS Kubernetes Benchmark v1.5, a set of recommendations for configuring Kubernetes to support a strong security posture. | cis-k8s-v1.5.1 | Kubernetes standard |
| Pod Security Policy: Apply protections based on the Kubernetes Pod Security Policy (PSP). | psp-v2022 | Kubernetes standard |
| Pod Security Standards Baseline: Apply protections based on the Kubernetes Pod Security Standards (PSS) Baseline policy. | pss-baseline-v2022 | Kubernetes standard |
| Pod Security Standards Restricted: Apply protections based on the Kubernetes Pod Security Standards (PSS) Restricted policy. | pss-restricted-v2022 | Kubernetes standard |
| Policy Essentials: Apply best practices to your cluster resources. | policy-essentials | Best practices |
| Samsung Security Checklist: Apply best practices to conform to Samsung Security Checklist items in your cluster resources. | samsung-security-checklist | Best practices |
| NIST SP 800-53 Rev. 5: Implements controls listed in NIST Special Publication (SP) 800-53, Revision 5. The bundle can help organizations protect their systems and data from a variety of threats with out-of-the-box security and privacy policies. | nist-sp-800-53-r5 | Industry standard |
| NIST SP 800-190: Implements controls listed in NIST Special Publication (SP) 800-190, Application Container Security Guide. The bundle is intended to help organizations with application container security, including image security, container runtime security, network security, and host system security. | nist-sp-800-190 | Industry standard |
| NSA CISA Kubernetes Hardening Guide v1.2: Apply protections based on the NSA CISA Kubernetes Hardening Guide v1.2. | nsa-cisa-k8s-v1.2 | Industry standard |
| PCI-DSS v4.0: Apply protections based on the Payment Card Industry Data Security Standard (PCI-DSS) v4.0. | pci-dss-v4.0 | Industry standard |
What’s next
- Learn more about applying individual constraints.
- Apply best practices to your clusters.
Use Policy Essentials policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the Policy Essentials bundle to apply KOSMOS recommended best practices to your cluster resources.
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce.
This bundle of constraints addresses and enforces policies in the following domains:
- RBAC and service accounts
- Pod Security Policies
- Container Network Interface (CNI)
- Secrets management
- General policies
Policy Essentials policy bundle constraints
| Constraint Name | Constraint Description |
|---|---|
| policy-essentials-no-secrets-as-env-vars | Prefer using Secrets as files over Secrets as environment variables |
| policy-essentials-pods-require-security-context | Apply Security Context to your Pods and containers |
| policy-essentials-prohibit-role-wildcard-access | Minimize the use of wildcards in Roles and ClusterRoles. |
| policy-essentials-psp-allow-privilege-escalation-container | Minimize the admission of containers with allowPrivilegeEscalation |
| policy-essentials-psp-capabilities | Containers must drop the NET_RAW capability and aren’t permitted to add back any capabilities. |
| policy-essentials-psp-host-namespace | Minimize the admission of containers with hostPID or hostIPC set to true. |
| policy-essentials-psp-host-network-ports | Minimize the admission of containers wanting to share the host network namespace |
| policy-essentials-psp-pods-must-run-as-nonroot | Minimize the admission of root containers |
| policy-essentials-psp-privileged-container | Minimize the admission of privileged containers |
| policy-essentials-psp-seccomp-default | Ensure that the seccomp profile is set to runtime/default or docker/default in your Pod definitions |
| policy-essentials-restrict-clusteradmin-rolebindings | Minimize the use of the cluster-admin role. |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Audit Policy Essentials policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To test your workloads' compliance with the KOSMOS recommended best practices outlined in the preceding table, you can deploy these constraints in audit mode, which reveals violations and, more importantly, gives you a chance to fix them before enforcing the policies on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/policy-essentials
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/policy-essentials
The output is similar to the following:
k8snoenvvarsecrets.constraints.gatekeeper.sh/policy-essentials-no-secrets-as-env-vars created
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/policy-essentials-psp-allow-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/policy-essentials-psp-pods-must-run-as-nonroot created
k8spspcapabilities.constraints.gatekeeper.sh/policy-essentials-psp-capabilities created
k8spsphostnamespace.constraints.gatekeeper.sh/policy-essentials-psp-host-namespace created
k8spsphostnetworkingports.constraints.gatekeeper.sh/policy-essentials-psp-host-network-ports created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/policy-essentials-psp-privileged-container created
k8spspseccomp.constraints.gatekeeper.sh/policy-essentials-psp-seccomp-default created
k8spodsrequiresecuritycontext.constraints.gatekeeper.sh/policy-essentials-pods-require-security-context created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/policy-essentials-prohibit-role-wildcard-access created
k8srestrictrolebindings.constraints.gatekeeper.sh/policy-essentials-restrict-clusteradmin-rolebindings created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/policy-essentials
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8snoenvvarsecrets.constraints.gatekeeper.sh/policy-essentials-no-secrets-as-env-vars dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/policy-essentials-psp-allow-privilege-escalation dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/policy-essentials-psp-pods-must-run-as-nonroot dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/policy-essentials-psp-capabilities dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/policy-essentials-psp-host-namespace dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/policy-essentials-psp-host-network-ports dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/policy-essentials-psp-privileged-container dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/policy-essentials-psp-seccomp-default dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8spodsrequiresecuritycontext.constraints.gatekeeper.sh/policy-essentials-pods-require-security-context dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/policy-essentials-prohibit-role-wildcard-access dryrun 0
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/policy-essentials-restrict-clusteradmin-rolebindings dryrun 0
View policy violations
Once the policy constraints are installed in audit mode, you can view violations on the cluster in the UI by using the Policy Controller dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l bundleName=policy-essentials -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, you can view the violation messages for each constraint with:
kubectl get constraint -l bundleName=policy-essentials -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
Change Policy Essentials policy bundle enforcement action
Once you’ve reviewed the policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on or denies (blocks) non-compliant resources being applied to the cluster.
[!WARNING] The deny enforcement action should be used with care because it can block required changes and interrupt critical workloads or the cluster. It is strongly recommended to use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=policy-essentials -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=policy-essentials
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
  - image: wordpress
    name: wordpress
    ports:
    - containerPort: 80
      name: wordpress
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [policy-essentials-psp-capabilities] container <wordpress> is not dropping all required capabilities. Container must drop all of ["NET_RAW"] or "ALL"
pod/wp-non-compliant created
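To clear warnings like the one above, the Pod's security context must satisfy the constraints in the preceding table. The following is a hypothetical compliant variant; the field values are assumptions inferred from the constraint descriptions, not taken from the bundle documentation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-compliant   # hypothetical name
  labels:
    app: wordpress
spec:
  containers:
  - image: wordpress
    name: wordpress
    securityContext:
      runAsNonRoot: true                 # policy-essentials-psp-pods-must-run-as-nonroot
      allowPrivilegeEscalation: false    # policy-essentials-psp-allow-privilege-escalation-container
      capabilities:
        drop: ["NET_RAW"]                # policy-essentials-psp-capabilities
      seccompProfile:
        type: RuntimeDefault             # policy-essentials-psp-seccomp-default
    ports:
    - containerPort: 80
      name: wordpress
```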
Remove Policy Essentials policy bundle
If needed, you can remove the Policy Essentials policy bundle from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=policy-essentials
Use Samsung security checklist policy constraints
Policy Controller comes with a default library of constraint templates that can be used with the Samsung Security Checklist bundle to apply Samsung Security Checklist items to your cluster resources.
This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce.
This bundle of constraints addresses and enforces policies in the following domains:
- RBAC and service accounts
- Pod Security Policies
- Container Network Interface (CNI)
- Secrets management
- General policies
Samsung security checklist policy bundle constraints
| Constraint Name | Constraint Description | Cluster Type |
|---|---|---|
| samsung-security-checklist-app-gw-require-tls-version | Requires that the Azure Application Gateway applies a cipher policy that allows TLSv1.2 or higher | AKS |
| samsung-security-checklist-lb-restrict-traffic-rules | Ensures that the Security Group's inbound and outbound rules comply with the Samsung Security Checklist requirements | AKS |
| samsung-security-checklist-aks-require-private-cluster | Enables a private cluster to restrict worker-node-to-API access | AKS |
| samsung-security-checklist-aks-restrict-public-access-sources | Restricts public access sources for API server endpoints to prevent unrestricted access from the internet | AKS |
| samsung-security-checklist-alb-require-https-backend | Requires that Application LoadBalancer (ALB) target groups use the HTTPS protocol to encrypt communication | EKS |
| samsung-security-checklist-alb-require-https-protocol | Requires that the Application LoadBalancer (ALB) allows only encrypted protocols (HTTPS:443) | EKS |
| samsung-security-checklist-alb-require-https-redirect | Requires that the Application LoadBalancer (ALB) enables SSL redirect and specifies port 443 as the redirect target | EKS |
| samsung-security-checklist-alb-require-tls-version | Requires that the Application LoadBalancer (ALB) applies a cipher policy that allows TLSv1.2 or higher | EKS |
| samsung-security-checklist-alb-restrict-traffic-rules | Ensures that the Security Group's inbound and outbound rules comply with the Samsung Security Checklist requirements | EKS |
| samsung-security-checklist-nlb-require-tls-protocol | Requires that Network and Classic LoadBalancers allow only encrypted protocols (TLS:443) | EKS |
| samsung-security-checklist-nlb-require-tls-version | Requires that Network and Classic LoadBalancers apply a cipher policy that allows TLSv1.2 or higher | EKS |
| samsung-security-checklist-nlb-restrict-traffic-rules | Ensures that the Security Group's inbound and outbound rules comply with the Samsung Security Checklist requirements | EKS |
| samsung-security-checklist-eks-disable-ssh-access | Disables SSH access to all nodegroups | EKS |
| samsung-security-checklist-eks-require-logging | Requires logging to be enabled to detect abnormal access to EKS cluster services and systems and to provide audit records | EKS |
| samsung-security-checklist-eks-restrict-public-access-sources | Restricts public access sources for API server endpoints to prevent unrestricted access from the internet | EKS |
| samsung-security-checklist-alb-require-https-protocol | Requires that the Application LoadBalancer (ALB) allows only encrypted protocols (HTTPS:443) | GKE |
| samsung-security-checklist-alb-require-https-redirect | Requires that the Application LoadBalancer (ALB) enables SSL redirect and specifies port 443 as the redirect target | GKE |
| samsung-security-checklist-alb-require-tls-version | Requires that the Application LoadBalancer (ALB) applies a cipher policy that allows TLSv1.2 or higher | GKE |
| samsung-security-checklist-gke-require-private-cluster | Enables a private cluster to restrict worker-node-to-API access | GKE |
| samsung-security-checklist-gke-require-secrets-encryption | Protects your Secrets in etcd with a key that you manage in Cloud KMS | GKE |
| samsung-security-checklist-gke-restrict-public-access-sources | Restricts public access sources for API server endpoints to prevent unrestricted access from the internet | GKE |
| samsung-security-checklist-alb-require-https-backend | Requires that Application LoadBalancer (ALB) target groups use the HTTPS protocol to encrypt communication | MKS |
| samsung-security-checklist-alb-require-https-protocol | Requires that the Application LoadBalancer (ALB) allows only encrypted protocols (HTTPS:443) | MKS |
| samsung-security-checklist-alb-require-https-redirect | Requires that the Application LoadBalancer (ALB) enables SSL redirect and specifies port 443 as the redirect target | MKS |
| samsung-security-checklist-alb-require-tls-version | Requires that the Application LoadBalancer (ALB) applies a cipher policy that allows TLSv1.2 or higher | MKS |
| samsung-security-checklist-alb-restrict-traffic-rules | Ensures that the Security Group's inbound and outbound rules comply with the Samsung Security Checklist requirements | MKS |
| samsung-security-checklist-nlb-require-tls-protocol | Requires that Network and Classic LoadBalancers allow only encrypted protocols (TLS:443) | MKS |
| samsung-security-checklist-nlb-require-tls-version | Requires that Network and Classic LoadBalancers apply a cipher policy that allows TLSv1.2 or higher | MKS |
| samsung-security-checklist-nlb-restrict-traffic-rules | Ensures that the Security Group's inbound and outbound rules comply with the Samsung Security Checklist requirements | MKS |
| samsung-security-checklist-mks-disable-ssh-access | Disables SSH access to all nodegroups | MKS |
| samsung-security-checklist-mks-require-logging | Requires logging to be enabled to detect abnormal access to MKS cluster services and systems and to provide audit records | MKS |
| samsung-security-checklist-mks-restrict-public-access-sources | Restricts public access sources for API server endpoints to prevent unrestricted access from the internet | MKS |
Before you begin
- Install Policy Controller on your cluster with the default library of constraint templates.
Audit Samsung security checklist policy bundle
Policy Controller lets you enforce policies for your Kubernetes cluster. To test your workloads' compliance with the Samsung Security Checklist policies outlined in the preceding table, you can deploy these constraints in audit mode, which reveals violations and, more importantly, gives you a chance to fix them before enforcing the policies on your Kubernetes cluster.
You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
- (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/samsung-security-checklist
- Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/samsung-security-checklist
The output is the following:
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-backend created
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-protocol created
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-redirect created
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-tls-version created
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-restrict-traffic-rules created
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-require-tls-protocol created
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-require-tls-version created
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-restrict-traffic-rules created
- Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/samsung-security-checklist
The output is similar to the following:
NAME ENFORCEMENT-ACTION TOTAL-VIOLATIONS
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-backend dryrun 1
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-protocol dryrun 1
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-redirect dryrun 1
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-tls-version dryrun 2
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-restrict-traffic-rules dryrun 2
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-require-tls-protocol dryrun 0
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-require-tls-version dryrun 0
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-restrict-traffic-rules dryrun 0
View policy violations
Once the policy constraints are installed in audit mode, you can view violations on the cluster in the UI by using the Policy Controller dashboard.
You can also use kubectl to view violations on the cluster using the following command:
kubectl get constraint -l bundleName=samsung-security-checklist -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
If violations are present, you can view the violation messages for each constraint with:
kubectl get constraint -l bundleName=samsung-security-checklist -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
Change Samsung security checklist policy bundle enforcement action
Once you’ve reviewed the policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on or denies (blocks) non-compliant resources being applied to the cluster.
[!WARNING] The deny enforcement action should be used with care because it can block required changes and interrupt critical workloads or the cluster. It is strongly recommended to use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.
- Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=samsung-security-checklist -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
- Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=samsung-security-checklist
Test policy enforcement
Create a non-compliant resource on the cluster using the following command:
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: nginx
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"
spec:
  ingressClassName: alb
  rules:
  - host: this.is.sample.host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 443
EOF
The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:
Warning: [samsung-security-checklist-alb-require-tls-version] Application LoadBalancer (ALB) is not using TLSv1.2 or higher version.
ingress/nginx created
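Based on the warning, the missing piece is a cipher policy pinned to TLSv1.2 or higher. With the AWS Load Balancer Controller this is typically set through an annotation like the one sketched below; the annotation name and policy value are assumptions, so check your ingress controller's documentation for the exact key your platform expects:

```yaml
metadata:
  annotations:
    # Hypothetical fix: pin the ALB listener to a TLSv1.2+ cipher policy.
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-TLS-1-2-2017-01
```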
Remove Samsung security checklist policy bundle
If needed, you can remove the Samsung Security Checklist policy bundle from the cluster.
- Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=samsung-security-checklist