Apply Policies
Audit using constraints
Policy Controller constraint objects enable you to enforce policies for your Kubernetes clusters. To help test your policies, you can add an enforcement action to your constraints. You can then view violations in constraint objects and logs.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements, who provide and maintain automation to audit or enforce those requirements, and who manage the lifecycle of the underlying tech infrastructure.
Types of enforcement actions
There are three enforcement actions: deny, dryrun, and warn.
deny is the default enforcement action. It applies automatically, even if you don’t specify an enforcement action in your constraint. Use deny to prevent a given cluster operation from occurring when there’s a violation.
dryrun lets you monitor violations of your rules without actively blocking transactions. You can use it to test if your constraints are working as intended, prior to enabling active enforcement using the deny action. Testing constraints this way can prevent disruptions caused by an incorrectly configured constraint.
warn is similar to dryrun, but also provides an immediate message about the violations that occur at admission time.
When you test new constraints or perform migration actions, like upgrading platforms, we recommend switching the enforcement action from deny to warn or dryrun so that you can verify that your policies work as expected.
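For example, the following sketch (a warn variant of the constraint used later on this page; the name is illustrative) surfaces violations at admission time without blocking requests:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPAllowedUsers
metadata:
  name: user-must-be-3333-warn   # illustrative name
spec:
  enforcementAction: warn        # admit the request, but return a warning
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    runAsUser:
      rule: MustRunAs
      ranges:
        - min: 3333
          max: 3333
```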
Adding enforcement actions
You can add enforcementAction: deny, enforcementAction: dryrun, or enforcementAction: warn to a constraint.
The following example constraint, named audit.yaml, adds the dryrun action.
```yaml
# audit.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPAllowedUsers
metadata:
  name: user-must-be-3333
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    runAsUser:
      rule: MustRunAs
      ranges:
        - min: 3333
          max: 3333
```
Create the constraint. For example, apply it using kubectl apply -f:
```shell
kubectl apply -f audit.yaml
```
Viewing audit results
Audited violations are appended to the Constraint objects and are also written to the logs. Violations that the admission controller rejects do not appear in the logs.
Viewing audit results in constraint objects
To see violations of a given constraint, run the following command and view the status field:

```shell
kubectl get CONSTRAINT_KIND CONSTRAINT_NAME -o yaml
```
Note: You can see a maximum of 20 violations in the output. When there are more than 20 results, you can view the complete list by using logs.
Example
To see the output of the constraint from audit.yaml, run the following command:
```shell
kubectl get K8sPSPAllowedUsers user-must-be-3333 -o yaml
```
The output you see is similar to the following:
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPAllowedUsers
metadata:
  creationTimestamp: "2020-05-22T01:34:22Z"
  generation: 1
  name: user-must-be-3333
  resourceVersion: "13351707"
  selfLink: /apis/constraints.gatekeeper.sh/v1beta1/k8spspallowedusers/user-must-be-3333
  uid: 5d0b39a8-9bcc-11ea-bb38-42010a80000c
spec:
  enforcementAction: dryrun
  match:
    kinds:
    - apiGroups:
      - ""
      kinds:
      - Pod
  parameters:
    runAsUser:
      ranges:
      - max: 3333
        min: 3333
      rule: MustRunAs
status:
  auditTimestamp: "2020-05-22T01:39:05Z"
  byPod:
  - enforced: true
    id: gatekeeper-controller-manager-6b665d4c4d-lwnz5
    observedGeneration: 1
  totalViolations: 5
  violations:
  - enforcementAction: dryrun
    kind: Pod
    message: Container git-sync is attempting to run as disallowed user 65533
    name: git-importer-86564db8cb-5r4gs
    namespace: config-management-system
  - enforcementAction: dryrun
    kind: Pod
    message: Container manager is attempting to run as disallowed user 1000
    name: gatekeeper-controller-manager-6b665d4c4d-lwnz5
    namespace: gatekeeper-system
  - enforcementAction: dryrun
    kind: Pod
    message: Container kube-proxy is attempting to run without a required securityContext/runAsUser
    name: kube-proxy-gke-fishy131-default-pool-7369b17c-cckf
    namespace: kube-system
  - enforcementAction: dryrun
    kind: Pod
    message: Container kube-proxy is attempting to run without a required securityContext/runAsUser
    name: kube-proxy-gke-fishy131-default-pool-7369b17c-jnhb
    namespace: kube-system
  - enforcementAction: dryrun
    kind: Pod
    message: Container kube-proxy is attempting to run without a required securityContext/runAsUser
    name: kube-proxy-gke-fishy131-default-pool-7369b17c-xrd8
    namespace: kube-system
```
Viewing audit results in logs
To get all Policy Controller logs, run the following command:
```shell
kubectl logs -n kosmos-policysync -l gatekeeper.sh/system=yes
```
The audit results have "process":"audit" in the log lines, so you can pipe the output to another command and filter on those lines. For example, you can use jq, which parses JSON and lets you select a specific log type.
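The following command is a minimal sketch; it assumes that jq is installed and that each Policy Controller log line is a single JSON object. It keeps only the audit entries:

```shell
kubectl logs -n kosmos-policysync -l gatekeeper.sh/system=yes \
  | jq 'select(.process == "audit")'
```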
Example audit result from logging:
```json
{
  "level": "info",
  "ts": 1590111401.9769812,
  "logger": "controller",
  "msg": "Container kube-proxy is attempting to run without a required securityContext/runAsUser",
  "process": "audit",
  "audit_id": "2020-05-22T01:36:24Z",
  "event_type": "violation_audited",
  "constraint_kind": "K8sPSPAllowedUsers",
  "constraint_name": "user-must-be-3333",
  "constraint_namespace": "",
  "constraint_action": "dryrun",
  "resource_kind": "Pod",
  "resource_namespace": "kube-system",
  "resource_name": "kube-proxy-gke-fishy131-default-pool-7369b17c-xrd8"
}
```
What’s next
- Read more about Creating constraints
- Use the constraint template library
- Learn how to use constraints instead of PodSecurityPolicies
Use the constraint template library
This page shows you how to define Policy Controller constraints by using the pre-existing constraint templates provided by KOSMOS.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements, who provide and maintain automation to audit or enforce those requirements, and who use templating for declarative configuration.
Policy Controller lets you enforce policy for a Kubernetes cluster by defining one or more constraint objects. After a constraint is installed, requests to the API server are checked against the constraint and are rejected if they don’t comply. Pre-existing non-compliant resources are reported at audit time.
Every constraint is backed by a constraint template that defines the schema and logic of the constraint. Constraint templates can be sourced from KOSMOS and third parties, or you can write your own. For more information about creating new templates, see Write a custom constraint template.
Before you begin
Examine the constraint template library
When you define a constraint, you specify the constraint template that it extends. A library of common constraint templates developed by KOSMOS is installed by default, and many organizations don’t need to create custom constraint templates directly in Rego. Constraint templates provided by KOSMOS have the label kosmos.spcplatform.com/managed-by=kosmos-policy-operator.
To list the constraint templates in the library, use the following command:
```shell
kubectl get constrainttemplates \
  -l="kosmos.spcplatform.com/managed-by=kosmos-policy-operator"
```
To describe a constraint template and check its required parameters, use the following command:
```shell
kubectl describe constrainttemplate CONSTRAINT_TEMPLATE_NAME
```
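For example, assuming the library's K8sRequiredLabels template is installed, the following command shows its schema and required parameters:

```shell
kubectl describe constrainttemplate k8srequiredlabels
```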
You can also view all constraint templates in the library.
Define a constraint
You define a constraint by using YAML, and you don’t need to understand or write Rego. Instead, a constraint invokes a constraint template and provides it with parameters specific to the constraint.
```yaml
# ns-must-have-geo.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-geo
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "geo"
```
To create the constraint, use kubectl apply -f:
```shell
kubectl apply -f ns-must-have-geo.yaml
```
Audit a constraint
If the constraint is configured and installed correctly, its status.byPod[].enforced field is set to true, whether the constraint is configured to enforce operations or only to audit them.
Constraints are enforced by default, and a violation of a constraint prevents a given cluster operation. You can set a constraint’s spec.enforcementAction to dryrun to report violations in the status.violations field without preventing the operation.
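For example, the following sketch (assuming the ns-must-have-geo constraint from the previous section) extracts the recorded violations directly:

```shell
kubectl get k8srequiredlabels ns-must-have-geo \
  -o jsonpath='{.status.violations}'
```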
To learn more about auditing, see Audit using constraints.
Caveats when syncing constraints
If you sync your constraints from a centralized source, such as a Git repository, with Config, keep the following caveats in mind.
Eventual consistency
You can commit constraints to a source of truth like a Git repository, and can limit their effects using ClusterSelectors or NamespaceSelectors. Because syncing is eventually consistent, keep the following caveats in mind:
- If a cluster operation triggers a constraint whose NamespaceSelector refers to a namespace that hasn’t been synced, the constraint is enforced and the operation is prevented. In other words, a missing namespace “fails closed.”
- If you change the labels of a namespace, the cache may contain outdated data for a brief time.
Minimize the need to rename a namespace or change its labels, and test constraints that impact a renamed or relabeled namespace to ensure they work as expected.
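For illustration, the following sketch scopes a constraint by using the spec.match.namespaceSelector field (the constraint name, label, and parameters are hypothetical). If the selected namespace's labels haven't synced yet, operations that the constraint matches fail closed, as described in the preceding list:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: pods-must-have-owner   # hypothetical constraint
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaceSelector:
      matchLabels:
        env: production        # applies only in namespaces with this label
  parameters:
    labels:
      - key: "owner"
```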
Configure Policy Controller for referential constraints
You can create a configuration that tells Policy Controller what kinds of objects to watch, such as namespaces.
Save the following YAML manifest to a file, and apply it with kubectl. The manifest configures Policy Controller to watch Namespace and Pod objects. Under spec.gvks, create an entry with the group, version, and kind of each type of object that you want to watch.
```yaml
apiVersion: syncset.gatekeeper.sh/v1alpha1
kind: SyncSet
metadata:
  name: syncset-1
spec:
  gvks:
    - group: ""
      version: "v1"
      kind: "Namespace"
    - group: ""
      version: "v1"
      kind: "Pod"
```
Referential constraints
Note: Before you enable referential constraints, you must create a configuration for the kosmos-policysync namespace.
A referential constraint references another object in its definition. For example, you could create a constraint that requires Ingress objects in a cluster to have unique hostnames. The constraint is referential if its constraint template contains the string data.inventory in its Rego.
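For illustration, the following Rego fragment is a minimal sketch of that pattern (the package name, logic, and message are illustrative). It uses data.inventory to compare an incoming Ingress against the Ingresses that Policy Controller has already cached:

```rego
package k8suniqueingresshost

violation[{"msg": msg}] {
  input.review.kind.kind == "Ingress"
  host := input.review.object.spec.rules[_].host
  # data.inventory caches cluster objects: namespace -> apiVersion -> kind -> name
  other := data.inventory.namespace[_][_]["Ingress"][name]
  name != input.review.object.metadata.name
  other.spec.rules[_].host == host
  msg := sprintf("Ingress host %v conflicts with an existing Ingress", [host])
}
```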
Referential constraints are enabled by default if you install Policy Controller by using the KOSMOS console. Referential constraints are only guaranteed to be eventually consistent, which creates the following risks:
- On an overloaded API server, the contents of Policy Controller’s cache can become stale, causing a referential constraint to “fail open”, meaning that the enforcement action appears to be working when it isn’t. For example, you might create Ingresses with duplicate hostnames too quickly for the admission controller to detect the duplicates.
- The order in which constraints are installed and the order in which the cache is updated are both random.
List all constraints
To list all constraints installed on a cluster, use the following command:
```shell
kubectl get constraint
```
You can also see an overview of your applied constraints in the KOSMOS console.
Remove a constraint
To find all constraints that use a constraint template, use the following command to list all objects with the same kind as the constraint template’s metadata.name:
```shell
kubectl get CONSTRAINT_TEMPLATE_NAME
```
To remove a constraint, specify its kind and name:
```shell
kubectl delete CONSTRAINT_TEMPLATE_NAME CONSTRAINT_NAME
```
When you remove a constraint, it stops being enforced as soon as the API server marks the constraint as deleted.
IMPORTANT: Records of violations are stored in the constraint object itself, so they cannot be retrieved after the constraint is removed.
Remove all constraint templates
Note: When you remove all constraint templates, all constraints that use those templates are also removed, because constraints must be backed by constraint templates.
To disable the constraint template library, complete the following steps:
- In the KOSMOS console, go to the Policy page under the Fleet section.
- Under the Settings tab, in the cluster table, select Edit in the Edit configuration column.
- In the Add/Edit policy bundles menu, toggle the template library and all policy bundles off.
- Select Save changes.
Restore the constraint template library
To enable the constraint template library, complete the following steps:
- In the KOSMOS console, go to the Policy page under the Fleet section.
- Under the Settings tab, in the cluster table, select Edit in the Edit configuration column.
- In the Add/Edit policy bundles menu, toggle the template library on. You can also enable any or all of the policy bundles.
- Select Save changes.
What’s next
- Learn about Policy Controller bundles.
- View the constraint template library reference documentation.
- Learn how to create custom constraints.
- Troubleshoot Policy Controller.
Write a custom constraint template
This page shows you how to write a custom constraint template and use it to extend Policy Controller if you cannot find a pre-written constraint template that suits your needs.
This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements, who provide and maintain automation to audit or enforce those requirements, and who use templating for declarative configuration.
Policy Controller policies are described by using the OPA Constraint Framework and are written in Rego. A policy can evaluate any field of a Kubernetes object.
Writing policies using Rego is a specialized skill. For this reason, a library of common constraint templates is installed by default. You can likely invoke these constraint templates when creating constraints. If you have specialized needs, you can create your own constraint templates.
Constraint templates let you separate a policy’s logic from its specific requirements, for reuse and delegation. You can create constraints by using constraint templates developed by third parties, such as open source projects, software vendors, or regulatory experts.
Before you begin
Example constraint template
Following is an example constraint template that denies all resources whose name matches a value provided by the creator of the constraint. The rest of this page discusses the contents of the template, highlighting important concepts along the way.
```yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sdenyname
spec:
  crd:
    spec:
      names:
        kind: K8sDenyName
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            invalidName:
              type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sdenynames
        violation[{"msg": msg}] {
          input.review.object.metadata.name == input.parameters.invalidName
          msg := sprintf("The name %v is not allowed", [input.parameters.invalidName])
        }
```
Example constraint
Following is an example constraint that you might implement to deny all resources named policy-violation:
```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sDenyName
metadata:
  name: no-policy-violation
spec:
  parameters:
    invalidName: "policy-violation"
```
Parts of a constraint template
Constraint templates have two important pieces:
- The schema of the constraint that you want users to create. The schema of a constraint template is stored in the `crd` field.
- The Rego source code that is executed when the constraint is evaluated. The Rego source code for a template is stored in the `targets` field.
Schema (crd field)
The crd field is a blueprint for creating the Kubernetes custom resource definition (CRD) that defines the constraint resource for the Kubernetes API server. You only need to populate the following fields.
| Field | Description |
|---|---|
| spec.crd.spec.names.kind | The Kind of the constraint. When lower-cased, the value of this field must be equal to metadata.name. |
| spec.crd.spec.validation.openAPIV3Schema | The schema for the spec.parameters field of the constraint resource (Policy Controller automatically defines the rest of the constraint’s schema). It follows the same conventions as it would in a regular CRD resource. |
Prefixing the constraint template’s kind with K8s is a convention that helps you avoid collisions with other kinds of constraint templates, such as Forseti templates that target KOSMOS resources.
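For example, a template whose constraints take a list of allowed strings might declare its parameters schema as follows (a sketch; the parameter name is illustrative):

```yaml
validation:
  openAPIV3Schema:
    properties:
      allowedNames:
        type: array
        items:
          type: string
```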
Rego source code (targets field)
The following sections provide you with more information about the Rego source code.
Location
The Rego source code is stored under the spec.targets field, where targets is an array of objects of the following format:
{"target": "admission.k8s.gatekeeper.sh","rego": REGO_SOURCE_CODE, "libs": LIST_OF_REGO_LIBRARIES}
- `target`: tells Policy Controller what system we are looking at (in this case, Kubernetes); only one entry in `targets` is allowed.
- `rego`: the source code for the constraint.
- `libs`: an optional list of libraries of Rego code that is made available to the constraint template; it is meant to make it easier to use shared libraries and is out of scope for this document.
Source code
Following is the Rego source code for the preceding constraint template:
```rego
package k8sdenynames

violation[{"msg": msg}] {
  input.review.object.metadata.name == input.parameters.invalidName
  msg := sprintf("The name %v is not allowed", [input.parameters.invalidName])
}
```
Note the following:
- `package k8sdenynames` is required by OPA (Rego’s runtime). The value is ignored.
- The Rego rule that Policy Controller invokes to see if there are any violations is called `violation`. If this rule has matches, a violation of the constraint has occurred.
- The `violation` rule has the signature `violation[{"msg": "violation message for the user"}]`, where the value of `"msg"` is the violation message that is returned to the user.
- The parameters provided to the constraint are made available under the keyword `input.parameters`.
- The request under test is stored under the keyword `input.review`.
The keyword input.review has the following fields.
| Field | Description |
|---|---|
| uid | The unique ID for this particular request; it is not available during audit. |
| kind | The kind information for the object under test. It has the fields kind (the resource kind), group (the resource group), and version (the resource version). |
| name | The resource name. It might be empty if the user is relying on the API server to generate the name on a CREATE request. |
| namespace | The resource namespace (not provided for cluster-scoped resources). |
| operation | The operation requested (for example, CREATE or UPDATE); it is not available during audit. |
| userInfo | The requesting user’s information; it is not available during audit. It has the fields username (the user making the request), uid (the user’s UID), groups (a list of groups that the user is a member of), and extra (any extra user information provided by Kubernetes). |
| object | The object that the user is attempting to modify or create. |
| oldObject | The original state of the object; it is only available on UPDATE operations. |
| dryRun | Whether this request was invoked with kubectl --dry-run; it is not available during audit. |
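To tie these fields together, the following hypothetical template body (a sketch; the package name and policy are illustrative) denies UPDATE requests to objects in the kube-system namespace. Note that, per the preceding table, operation is unavailable during audit:

```rego
package k8sdenykubesystemupdates

violation[{"msg": msg}] {
  # `operation` is only populated at admission time, not during audit
  input.review.operation == "UPDATE"
  input.review.namespace == "kube-system"
  msg := sprintf("updates to %v/%v are not allowed in kube-system",
    [input.review.kind.kind, input.review.name])
}
```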
Install your constraint template
After you create your constraint template, use kubectl apply to apply it, and Policy Controller takes care of ingesting it. Be sure to check the status field of your constraint template to make sure that there were no errors instantiating it. On successful ingestion, the status field shows created: true, and the observedGeneration noted in the status field equals the metadata.generation field.
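For example, to confirm that the k8sdenyname template from this page was ingested successfully, you might run the following command, which prints true on success:

```shell
kubectl get constrainttemplate k8sdenyname -o jsonpath='{.status.created}'
```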
After the template is ingested, you can apply constraints for it as described in Creating constraints.
Remove a constraint template
> [!CAUTION]
> When you remove a constraint template that is in use by constraints, Policy Controller removes the constraints that depend on that template.
To remove a constraint template, complete the following steps:
- Verify that no constraints that you want to preserve are using the constraint template:

  ```shell
  kubectl get CONSTRAINT_TEMPLATE_NAME
  ```

  If there’s a naming conflict between the constraint template’s name and a different object in the cluster, use the following command instead:

  ```shell
  kubectl get CONSTRAINT_TEMPLATE_NAME.constraints.gatekeeper.sh
  ```

- Remove the constraint template:

  ```shell
  kubectl delete constrainttemplate CONSTRAINT_TEMPLATE_NAME
  ```
When you remove a constraint template, you can no longer create constraints that reference it.
What’s next
- Learn more about Policy Controller.
- View the constraint template library reference documentation.
- Learn how to use constraints instead of PodSecurityPolicies.