Vault installation
Overview
HashiCorp Vault provides secure storage of, and tight control over access to, tokens, passwords, certificates, encryption keys, and other sensitive data, and makes them accessible via a UI, CLI, or HTTP API.
Kosmos users can install Vault using the Kosmos-provided Vault AppTemplate, which relies on the publicly available vault-helm chart. When installing Vault in a target cluster, the AppTemplate lets you provide configuration in three ways:
- No configuration: default values are used and Vault is installed in dev mode.
- Selected parameters: the values you provide override the chart defaults for those parameters.
- Raw YAML: a single parameter accepts the entire configuration as raw YAML, which must conform to the schema of the vault-helm chart's values.yaml.
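Under the documented precedence (the vault_helm_values_raw parameter, when set, is used verbatim and all other field inputs are ignored), the resolution logic can be sketched in Python. The function and argument names here are illustrative, not part of the Kosmos API:

```python
def resolve_values(params: dict, chart_defaults: dict) -> dict:
    """Sketch of the AppTemplate's configuration precedence:
    a raw values override wins outright; otherwise individually
    set parameters are layered over the chart defaults."""
    raw = params.get("vault_helm_values_raw")
    if raw:
        # Raw YAML provided: used verbatim, all other inputs ignored.
        return {"vault_helm_values_raw": raw}
    merged = dict(chart_defaults)
    # Only explicitly-set parameters override the defaults.
    merged.update({k: v for k, v in params.items() if v is not None})
    return merged
```

With no parameters at all, resolve_values({}, defaults) simply returns the chart defaults, matching the default dev-mode install.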
Prerequisites
Before beginning the installation process, ensure the following requirements are met:
Cluster Access
- Fleet Cluster or
- DevSpace vCluster access
Namespace
- When working with Fleets, you must create the namespace yourself.
- On a vCluster, the namespace is created automatically.
Installation process using CLI
Step 1: Login to Kosmos
Login to the Kosmos console using the CLI:
kosmos login console.kosmos.spcplatform.com --access-key <YOUR_ACCESS_KEY>
Example output:
Successfully logged into Kosmos instance https://console.kosmos.spcplatform.com
Verify the logged-in user:
kosmos get currentuser
Step 2: Verify existing Kosmos app
List available application templates:
kosmos list apps
Ensure the vault app template exists.
Verify using:
kosmos get app --name vault
Example output:
NAME DISPLAY NAME DESCRIPTION
vault Vault Application template for installing Vault Helm chart.
Step 3: Check customizable parameters
Retrieve configurable parameters for the application:
kosmos get app --name vault -o json | jq '[.spec.parameters[] | {variable: .variable, type: .type, description: .description}]'
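If jq is unavailable, the same projection can be done in a few lines of Python. The sample document below is hypothetical; it only mirrors the JSON shape the jq filter above expects (.spec.parameters[] entries with variable, type, and description fields):

```python
import json

def summarize_parameters(app_json: str) -> list:
    """Mirror the jq filter: keep variable, type, and description
    for each entry under .spec.parameters."""
    app = json.loads(app_json)
    return [
        {"variable": p["variable"], "type": p["type"], "description": p["description"]}
        for p in app["spec"]["parameters"]
    ]

# Hypothetical document shaped like `kosmos get app --name vault -o json` output:
sample = json.dumps({
    "spec": {
        "parameters": [
            {"variable": "global_enabled", "type": "boolean",
             "description": "Enable deployment of Vault components.", "default": "true"},
        ]
    }
})
print(summarize_parameters(sample))
```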
For the list of parameters and configurable items, see the Parameters section below.
Step 4: Configure application parameters
If no configuration is provided, Vault installs with default parameters. At the other extreme, for fully granular configuration, use the vault_helm_values_raw parameter to provide YAML that is used verbatim as values.yaml.
Example helm override:

```yaml
global:
  enabled: true
  namespace: "vault"
  imagePullSecrets:
    - name: image-pull-secret
  tlsDisable: true
  psp:
    enable: false
    annotations: |
      seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default,runtime/default
      apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
      seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
      apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
  serverTelemetry:
    prometheusOperator: false
server:
  image:
    repository: "hashicorp/vault"
    tag: "1.21.2"
    pullPolicy: IfNotPresent
  updateStrategyType: "OnDelete"
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
```
Create parameter file
Create a parameter file with one parameter per line, in the format:

parameter_variable1: <value1>
parameter_variable2: <value2>
vault-params.yaml
Example 1: The following example sets the vault_helm_values_raw parameter; the multiline value is used as the helm chart values override.

```yaml
vault_helm_values_raw: |-
  global:
    enabled: true
    namespace: vault
  server:
    standalone:
      enabled: true
    dataStorage:
      enabled: true
      size: 10Gi
      mountPath: "/vault/data"
```
Example 2: The following example overrides some of the values; parameters left unspecified fall back to the chart defaults.

```yaml
server:
  enabled: true
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
```
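A parameter file like Example 1 can also be generated programmatically. The only subtlety is indenting the helm values correctly under the vault_helm_values_raw block scalar; a minimal Python sketch (the helper name and sample values are illustrative):

```python
import textwrap

def build_params_file(helm_values: str) -> str:
    """Render vault-params.yaml content that passes the given helm
    values verbatim through the vault_helm_values_raw parameter."""
    # Indent every line two spaces so it nests under the |- block scalar.
    body = textwrap.indent(helm_values.strip("\n"), "  ")
    return f"vault_helm_values_raw: |-\n{body}\n"

helm_values = """\
global:
  enabled: true
  namespace: vault
server:
  standalone:
    enabled: true
"""
print(build_params_file(helm_values), end="")
```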
Step 5: Install the application
Deploy the application using the Kosmos CLI:
kosmos install app \
--name vault \
--parameter-file vault-params.yaml \
--release-name vault \
--target-cluster mks-test-vault \
--fleet qe-fleet \
--target-namespace vault
Example output:
Successfully installed App 'vault'
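After a successful install you can check that the server is responding via Vault's unauthenticated health endpoint, /v1/sys/health. The sketch below assumes the Vault API has been made reachable locally, for example with kubectl -n vault port-forward svc/vault 8200:8200; note that Vault deliberately returns non-200 status codes for standby, sealed, or uninitialized servers, which urlopen raises as HTTPError.

```python
import json
import urllib.request

def vault_health(addr: str = "http://127.0.0.1:8200") -> dict:
    """Query Vault's health endpoint and return its JSON body,
    which includes fields such as 'initialized' and 'sealed'."""
    with urllib.request.urlopen(f"{addr}/v1/sys/health") as resp:
        return json.load(resp)
```

For a default dev-mode install, a healthy active server answers 200 with initialized: true and sealed: false.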
Installation process using Kosmos Management Console
Step 1: Access the Kosmos console
Navigate to the Kosmos Management Console.

Log in with your credentials.

Step 2: Navigate to application
Go to your selected cluster and click the App/Helm tab.
- Click Install App
- Find and select Vault from the template list

Step 3: Configure parameters
- Default Install: If you do not specify any parameters, the AppTemplate installs the Vault helm chart with default values.
- Partial Overrides: To override specific Vault values, use the corresponding parameter input fields.
- Full Override: To provide the entire content of values.yaml, use the vault_helm_values_raw parameter.
For an example of configuring the vault_helm_values_raw parameter, see Example 1 in the CLI installation section above.
Once installed, the Vault app appears in the "Apps" tab.
Parameter usage and configuration
The table below lists the parameters that can be configured when installing Vault through the Kosmos application.
Teardown and Cleanup
When you no longer need Vault in the target cluster, navigate to Apps, select the installed "Vault" app, and click "Uninstall".

Example output:
Successfully deleted Helm Release {"component": "task-runner", "namespace": "vault", "name": "vault"}
Parameters
| Name | Type | Required | Default Value | Description |
|---|---|---|---|---|
| vault_helm_values_raw | multiline | ❌ | | A yaml formatted input, which is used as values.yaml for the vault-helm chart to install this App.\nIf provided, only this value will be used as yaml input and the rest of the field inputs will be ignored.\nMust be in yaml format, and match the vault-helm chart values.yaml\nexample:\nglobal:\n enabled: true\n namespace: vault\nserver:\n standalone:\n enabled: true\n dataStorage:\n enabled: true\n size: 10Gi\n mountPath: "/vault/data" |
| global_enabled | boolean | ❌ | true | Enable deployment of Vault components. |
| namespace | string | ❌ | vault | Namespace for vault resources |
| global_image_pull_secrets | string | ❌ | | Image pull secret to use for registry authentication.\nAlternatively, the value may be specified as an array of strings.\nexample: image-pull-secret1,image-pull-secret2 |
| disable_tls | boolean | ❌ | true | Disable TLS for end-to-end encrypted transport |
| global_external_vault_addr | string | ❌ | | External vault server address for the injector and CSI provider to use.\nSetting this will disable deployment of a vault server.\nexample: https://myvaultserver:8200 |
| global_openshift | boolean | ❌ | false | Deploy to openshift |
| global_psp_enabled | boolean | ❌ | false | Create PodSecurityPolicy for pods |
| global_psp_annotations | multiline | ❌ | | Annotations for PodSecurityPolicy. Input should be a valid json map.\nexample:\nannotations:\n vaultproject.io/psp: 'privileged' |
| serverTelemetry_prometheusOperator | boolean | ❌ | false | Enable integration with the Prometheus Operator\nSee the top level serverTelemetry section below before enabling this feature. |
| injector_enabled | string | ❌ | - | Enable deployment of the Vault Agent Injector component. |
| injector_replicas | number | ❌ | 1 | Number of replicas for the Vault Agent Injector deployment. |
| injector_port | number | ❌ | 8080 | Port for the Vault Agent Injector to listen on. |
| injector_leader_elector_enabled | boolean | ❌ | false | If multiple replicas are specified, by default a leader will be determined\nso that only one injector attempts to create TLS certificates. |
| injector_metrics_enabled | boolean | ❌ | false | If true, will enable a node exporter metrics endpoint at /metrics. |
| injector_image_repository | string | ❌ | hashicorp/vault-k8s | Repository for the vault-k8s image used for the injector. |
| injector_image_tag | string | ❌ | 1.7.2 | Tag of the vault-k8s image to use for the injector. |
| injector_agent_image_repository | string | ❌ | hashicorp/vault | AgentImage sets the repo and tag of the Vault image to use for the Vault Agent\ncontainers. This should be set to the official Vault image. Vault 1.3.1+ is\nrequired. |
| injector_agent_image_tag | string | ❌ | 1.21.2 | Tag of the Vault image to use for the Vault Agent containers. |
| injector_agent_defaults_cpu_limit | string | ❌ | 500m | Default CPU limit for the injected Vault Agent containers. |
| injector_agent_defaults_cpu_request | string | ❌ | 250m | Default CPU request for the injected Vault Agent containers. |
| injector_agent_defaults_mem_limit | string | ❌ | 128Mi | Default memory limit for the injected Vault Agent containers. |
| injector_agent_defaults_mem_request | string | ❌ | 64Mi | Default memory request for the injected Vault Agent containers. |
| injector_agent_defaults_ephemeral_limit | string | ❌ | 128Mi | Default ephemeral storage limit for the injected Vault Agent containers. |
| injector_agent_defaults_ephemeral_request | string | ❌ | 64Mi | Default ephemeral storage request for the injected Vault Agent containers. |
| injector_agent_defaults_template | string | ❌ | map | Default template type for secrets when no custom template is specified.\nPossible values include: 'json' and 'map'. |
| injector_agent_defaults_template_config_exitonretryfailure | boolean | ❌ | false | Default value for the exit_on_retry_failure field in the template configuration for the injected Vault Agent containers.\nThis field controls whether the Vault Agent should exit if it encounters an error when trying to render a template and retry until it succeeds, or if it should keep retrying without exiting. |
| injector_agent_defaults_template_config_staticsecretrenderinterval | string | ❌ | | Agent default template config staticSecretRenderInterval.\nThis field controls the interval at which the Vault Agent should render static secrets. |
| injector_auth_path | string | ❌ | auth/kubernetes | The path to authenticate to Vault for the Vault Agent Injector. This should be set to the path of the Kubernetes auth method configured in Vault. |
| injector_log_level | string | ❌ | info | Configure the logging verbosity for the Vault Agent Injector.\nSupported log levels include: trace, debug, info, warn, error |
| injector_log_format | string | ❌ | standard | Configure the logging format for the Vault Agent Injector.\nSupported log formats include: json, standard |
| injector_revoke_on_shutdown | boolean | ❌ | false | Configures all Vault Agent sidecars to revoke their token when shutting down. |
| server_enabled | boolean | ❌ | true | If true, or '-' with global.enabled true, Vault server will be installed. |
| server_image_repository | string | ❌ | hashicorp/vault | Repository of the vault image to use for the server. |
| server_image_tag | string | ❌ | 1.21.2 | Tag of the vault image to use for the server. |
| server_log_level | string | ❌ | | Configure the logging verbosity for the Vault server.\nSupported log levels include: trace, debug, info, warn, error |
| server_log_format | string | ❌ | | Configure the logging format for the Vault server.\nSupported log formats include: json, standard. |
| server_resources | multiline | ❌ | | Resource requests, limits, etc. for the server cluster placement. This\nshould map directly to the value of the resources field for a PodSpec.\nBy default no direct resource request is made.\nexample:\nresources:\n requests:\n memory: 256Mi\n cpu: 250m\n limits:\n memory: 256Mi\n cpu: 250m |
| server_ingress_enabled | boolean | ❌ | false | Enable vault ingress. Allows ingress services to be created to allow external access\nfrom Kubernetes to access Vault pods.\nIn order to expose the service, use the route section below |
| server_ingress_labels | multiline | ❌ | | Labels for the Vault Server ingress. Allows ingress services to be created to allow external access\nfrom Kubernetes to access Vault pods.\nIf deployment is on OpenShift, the following block is ignored.\nIn order to expose the service, use the route section below\nexample:\nlabels:\n ingress-label1: label-val1\n ingress-label2: label-val2 |
| server_ingress_annotations | multiline | ❌ | | Annotations for the Vault Server ingress. Allows ingress services to be created to allow external access\nfrom Kubernetes to access Vault pods.\nIf deployment is on OpenShift, the following block is ignored.\nIn order to expose the service, use the route section below\nexample:\nannotations:\n kubernetes.io/ingress.class: nginx\n kubernetes.io/tls-acme: 'true' |
| server_ingress_ingress_class_name | string | ❌ | | Ingress class name for the Vault Server ingress.\nThis is used to specify the ingress class to use for the ingress resources created for Vault.\nThis is an alternative to specifying the ingress class through annotations. |
| server_ingress_path_type | string | ❌ | Prefix | Ingress path type for the Vault Server ingress.\nThis is used to specify the path type to use for the ingress resources created for Vault.\nSupported values include: ImplementationSpecific, Exact, Prefix |
| server_ingress_active_service | boolean | ❌ | true | When HA mode is enabled and K8s service registration is being used,\nconfigure the ingress to point to the Vault active service. |
| server_ingress_hosts | multiline | ❌ | | The hosts to use for the Vault Server ingress rules when using HA.\nThis should be set to the hostnames that will be used to access the active vault instance through the ingress,\nwhich is typically the main vault service when using the helm chart in HA mode.\nexample:\nhosts:\n - host: chart-example.local\n paths: [] |
| server_ingress_extra_paths | multiline | ❌ | | Extra paths to use for the Vault Server ingress rules when using HA.\nThis should be set to any extra paths that will be used to access the active vault instance through the ingress,\nwhich is typically the main vault service when using the helm chart in HA mode.\nexample:\nextraPaths:\n - path: /\n backend:\n service:\n name: ssl-redirect\n port:\n number: use-annotation |
| server_ingress_tls | multiline | ❌ | | TLS settings to use for the Vault Server ingress rules when using HA.\nThis should be set to any TLS settings that will be used to access the active vault instance through the ingress,\nwhich is typically the main vault service when using the helm chart in HA mode.\nexample:\ntls:\n - secretName: vault-tls\n hosts:\n - chart-example.local |
| server_ingress_host_aliases | multiline | ❌ | | hostAliases is a list of aliases to be added to /etc/hosts. Specified as a YAML list.\nexample:\nhostAliases:\n - ip: '127.0.0.1'\n hostnames:\n - 'example.local' |
| server_auth_delegator_enabled | boolean | ❌ | true | AuthDelegator enables a cluster role binding to be attached to the service\naccount. This cluster role binding can be used to setup Kubernetes auth\nmethod. See https://developer.hashicorp.com/vault/docs/auth/kubernetes |
| server_extra_init_containers | multiline | ❌ | | extraInitContainers is a list of init containers. Specified as a YAML list.\nThis is useful if you need to run a script to provision TLS certificates or\nwrite out configuration files in a dynamic way.\nexample:\nextraInitContainers:\n - name: my-init-container\n image: busybox\n command: ['sh', '-c', 'echo Hello from the init container! && sleep 5']\n args:\n - cd /tmp &&\n wget https://github.com/puppetlabs/vault-plugin-secrets-oauthapp/releases/download/v1.2.0/vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64.tar.xz -O oauthapp.xz &&\n tar -xf oauthapp.xz &&\n mv vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64 /usr/local/libexec/vault/oauthapp &&\n chmod +x /usr/local/libexec/vault/oauthapp\n volumeMounts:\n - name: plugins\n mountPath: /usr/local/libexec/vault |
| server_extra_containers | multiline | ❌ | | extraContainers is a list of additional containers to add to the Vault server statefulSet. Specified as a YAML list.\nThis is useful if you need to run a script to provision TLS certificates or\nwrite out configuration files in a dynamic way.\nexample:\nextraContainers:\n - name: my-extra-container\n image: busybox\n command: ['sh', '-c', 'echo Hello from the extra container! && sleep 5']\n args:\n - cd /tmp &&\n wget https://github.com/puppetlabs/vault-plugin-secrets-oauthapp/releases/download/v1.2.0/vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64.tar.xz -O oauthapp.xz &&\n tar -xf oauthapp.xz &&\n mv vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64 /usr/local/libexec/vault/oauthapp &&\n chmod +x /usr/local/libexec/vault/oauthapp\n volumeMounts:\n - name: plugins\n mountPath: /usr/local/libexec/vault |
| server_share_process_namespace | boolean | ❌ | false | |
| server_extra_args | multiline | ❌ | | extraArgs is a string containing additional Vault server arguments. |
| server_extra_ports | multiline | ❌ | | extraPorts is a list of extra ports. Specified as a YAML list.\nThis is useful if you need to add additional ports to the statefulset in a dynamic way.\nexample:\nextraPorts:\n - containerPort: 8300\n name: http-monitoring |
| server_termination_grace_period_seconds | number | ❌ | 10 | Optional duration in seconds the pod needs to terminate gracefully.\nSee: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/ |
| server_pre_stop_sleep_seconds | number | ❌ | 5 | Used to set the sleep time during the preStop step, if custom preStop\ncommands are not set. |
| server_pre_stop_commands | multiline | ❌ | | Used to define custom preStop exec commands to run before the pod is\nterminated. If not set, this will default to:\nexample:\npreStop:\n - '/bin/sh'\n - '-c'\n - 'sleep {{ .Values.server.preStopSleepSeconds }} && kill -SIGTERM $(pidof vault)' |
| server_post_start_commands | multiline | ❌ | | This can be used to automate processes such as initialization\nor bootstrapping auth methods.\nexample:\npostStart:\n - /bin/sh\n - -c\n - /vault/userconfig/myscript/run.sh |
| server_extra_environment_vars | multiline | ❌ | | extraEnvironmentVars is a list of extra environment variables to set with the stateful set.\nThese could be used to include variables required for auto-unseal.\nexample:\nextraEnvironmentVars:\n GOOGLE_REGION: global\n GOOGLE_PROJECT: myproject\n GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/myproject/myproject-creds.json |
| server_extra_secret_environment_vars | multiline | ❌ | | extraSecretEnvironmentVars is a list of extra environment variables to set with the stateful set.\nThese variables take their value from existing Secret objects.\nexample:\nextraSecretEnvironmentVars:\n - envName: AWS_SECRET_ACCESS_KEY\n secretName: vault\n secretKey: AWS_SECRET_ACCESS_KEY |
| server_volumes | multiline | ❌ | | volumes is a list of volumes made available to all containers. These are rendered\nvia toYaml rather than pre-processed like the extraVolumes value.\nThe purpose is to make it easy to share volumes between containers.\nexample:\nvolumes:\n - name: plugins\n emptyDir: {} |
| server_volume_mounts | multiline | ❌ | | volumeMounts is a list of volumeMounts for the main server container. These are rendered\nvia toYaml rather than pre-processed like the extraVolumeMounts value.\nexample:\nvolumeMounts:\n - mountPath: /usr/local/libexec/vault\n name: plugins\n readOnly: true |
| server_affinity | multiline | ❌ | | Affinity settings.\nCommenting out or setting the affinity variable as empty will allow\ndeployment to single-node services such as Minikube.\nThis should be YAML matching the PodSpec's affinity field. |
| server_topology_spread_constraints | multiline | ❌ | | Topology settings for server pods.\nref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/\nThis should be either a multi-line string or YAML matching the topologySpreadConstraints array\nin a PodSpec.\nexample:\ntopologySpreadConstraints: [] |
| server_tolerations | multiline | ❌ | | Toleration settings for server pods.\nThis should be either a multi-line string or YAML matching the Toleration array\nin a PodSpec.\nexample:\ntolerations: [] |
| server_node_selector | multiline | ❌ | | nodeSelector labels for server pod assignment, formatted as a YAML map.\nref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector\nexample:\nnodeSelector:\n beta.kubernetes.io/arch: amd64 |
| server_network_policy_enabled | boolean | ❌ | false | Enables network policy for server pods |
| server_network_policy_egress | multiline | ❌ | | Egress rules for the network policy for server pods, formatted as a YAML list.\nref: https://kubernetes.io/docs/concepts/services-networking/network-policies/#egress-rules\nexample:\negress:\n- to:\n - ipBlock:\n cidr: 10.0.0.0/24\n ports:\n - protocol: TCP\n port: 443 |
| server_network_policy_ingress | multiline | ❌ | | Ingress rules for the network policy for server pods, formatted as a YAML list.\nref: https://kubernetes.io/docs/concepts/services-networking/network-policies/#ingress-rules\nexample:\ningress:\n- from:\n - namespaceSelector: {}\n ports:\n - port: 8200\n protocol: TCP\n - port: 8201\n protocol: TCP |
| server_priority_class_name | string | ❌ | | Priority class for server pods.\nref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass |
| server_extra_labels | multiline | ❌ | | Extra labels to attach to the server pods.\nThis should be a YAML map of the labels to apply to the server pods.\nexample:\nextraLabels: {} |
| server_annotations | multiline | ❌ | | Extra annotations to attach to the server pods.\nThis can either be YAML or a YAML-formatted multi-line templated string map\nof the annotations to apply to the server pods. |
| server_include_config_annotation | boolean | ❌ | false | Add an annotation to the server configmap and the statefulset pods,\nvaultproject.io/config-checksum, that is a hash of the Vault configuration.\nThis can be used together with an OnDelete deployment strategy to help\nidentify which pods still need to be deleted during a deployment to pick up\nany configuration changes. |
| server_service_enabled | boolean | ❌ | true | |
| server_service_active_enabled | boolean | ❌ | true | Enable or disable the vault-active service, which selects Vault pods that\nhave labeled themselves as the cluster leader with vault-active: 'true'. |
| server_service_active_annotations | multiline | ❌ | | Extra annotations for the service definition.\nThis can either be a json or yaml map of the annotations to apply\nto the active service. |
| server_service_standby_enabled | boolean | ❌ | true | |
| server_service_standby_annotations | multiline | ❌ | | Extra annotations for the service definition. This can either be YAML or a\nYAML-formatted multi-line templated string map of the annotations to apply\nto the standby service. |
| server_service_instance_selector_enabled | boolean | ❌ | true | |
| server_service_cluster_ip | string | ❌ | | clusterIP controls whether a Cluster IP address is attached to the\nVault service within Kubernetes. By default, the Vault service will\nbe given a Cluster IP address; set to None to disable. When disabled,\nKubernetes will create a 'headless' service. Headless services can be\nused to communicate with pods directly through DNS instead of a round-robin\nload balancer. |
| server_service_type | string | ❌ | ClusterIP | Configures the service type for the main Vault service. Can be ClusterIP or NodePort. |
| server_service_ip_family_policy | string | ❌ | | The IP family and IP families options set the behaviour in a dual-stack environment.\nOmitting these values will let the service fall back to whatever the CNI dictates the defaults\nshould be. These are only supported for kubernetes versions >=1.23.0.\nConfigures the service's supported IP family policy, which can be either:\nSingleStack: Single-stack service. The control plane allocates a cluster IP for the Service, using the first configured service cluster IP range.\nPreferDualStack: Allocates IPv4 and IPv6 cluster IPs for the Service.\nRequireDualStack: Allocates Service .spec.ClusterIPs from both IPv4 and IPv6 address ranges. |
| server_service_ip_families | multiline | ❌ | | Sets the families that should be supported and the order in which they should be applied to ClusterIP as well.\nCan be IPv4 and/or IPv6. |
| server_service_publish_not_ready_addresses | boolean | ❌ | true | Do not wait for pods to be ready before including them in the services'\ntargets. Does not apply to the headless service, which is used for\ncluster-internal communication. |
| server_service_external_traffic_policy | string | ❌ | Cluster | The externalTrafficPolicy can be set to either Cluster or Local\nand is only valid for LoadBalancer and NodePort service types.\nThe default value is Cluster.\nref: https://kubernetes.io/docs/concepts/services-networking/service/#external-traffic-policy |
| server_service_node_port | number | ❌ | 0 | If type is set to 'NodePort', a specific nodePort value can be configured;\nit will be random if left blank. |
| server_service_active_node_port | number | ❌ | 0 | When HA mode is enabled:\nIf type is set to 'NodePort', a specific nodePort value can be configured;\nit will be random if left blank. |
| server_service_standby_node_port | number | ❌ | 0 | When HA mode is enabled:\nIf type is set to 'NodePort', a specific nodePort value can be configured;\nit will be random if left blank. |
| server_service_port | number | ❌ | | Port on which the Vault server is listening. |
| server_service_target_port | number | ❌ | | Target port to which the service should be mapped. |
| server_service_annotations | multiline | ❌ | | Extra annotations for the service definition.\nThis can either be a json or yaml map of the annotations to apply to the main vault service. |
| server_data_storage_enabled | boolean | ❌ | true | This configures the Vault Statefulset to create a PVC for data\nstorage when using the file or raft backend storage engines.\nSee https://developer.hashicorp.com/vault/docs/configuration/storage to know more |
| server_data_storage_size | string | ❌ | | Size of the PVC created. |
| server_data_storage_mountPath | string | ❌ | /vault/data | Location where the PVC will be mounted. |
| server_data_storage_storageClass | string | ❌ | | Name of the storage class to use. If null, it will use the\nconfigured default Storage Class. |
| server_data_storage_accessMode | string | ❌ | ReadWriteOnce | Access Mode of the storage device being used for the PVC. |
| server_data_storage_annotations | multiline | ❌ | | Annotations to apply to the PVC.\nexample:\nannotations:\n vaultproject.io/annotation-key: annotation-value |
| server_data_storage_labels | multiline | ❌ | | Labels to apply to the PVC.\nexample:\nlabels:\n vaultproject.io/label-key: label-value |
| server_persistent_volume_claim_retention_policy | multiline | ❌ | | Persistent Volume Claim (PVC) retention policy.\nref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention\nexample:\npersistentVolumeClaimRetentionPolicy:\n whenDeleted: Retain\n whenScaled: Retain |
| server_audit_storage_enabled | boolean | ❌ | false | This configures the Vault Statefulset to create a PVC for audit\nstorage when using the file backend storage engine for audit logs.\n See https://developer.hashicorp.com/vault/docs/audit to know more |
| server_audit_storage_size | string | ❌ | | Size of the PVC created for audit storage. |
| server_audit_storage_mountPath | string | ❌ | /vault/audit | Location where the PVC for audit storage will be mounted. |
| server_audit_storage_storageClass | string | ❌ | | Name of the storage class to use for audit storage. If null, it will use the\nconfigured default Storage Class. |
| server_audit_storage_accessMode | string | ❌ | ReadWriteOnce | Access Mode of the storage device being used for the PVC for audit storage. |
| server_audit_storage_annotations | multiline | ❌ | | Annotations to apply to the PVC for audit storage. |
| server_audit_storage_labels | multiline | ❌ | | Labels to apply to the PVC for audit storage. |
| server_dev_enabled | boolean | ❌ | false | |
| server_dev_root_token | string | ❌ | root | The root token to use when running in dev mode. Ignored if not running in dev mode.\nSets the VAULT_DEV_ROOT_TOKEN_ID value. |
| server_standalone_enabled | string | ❌ | | Run Vault in 'standalone' mode. This is the default mode and should be used for production deployments.\nIn this mode, Vault will manage its own storage and HA (if enabled) using the configured storage backend.\nSee https://developer.hashicorp.com/vault/docs/concepts/ha to know more |
| server_standalone_config | multiline | ❌ | | config is a raw string of default configuration when using a Stateful\ndeployment. Default is to use a PersistentVolumeClaim mounted at /vault/data\nand store data there. This is only used when using a Replica count of 1, and\nusing a stateful set. Supported formats are HCL and JSON.\n\nNote: Configuration files are stored in ConfigMaps so sensitive data\nsuch as passwords should be either mounted through extraSecretEnvironmentVars\nor through a Kube secret. For more information see:\nhttps://developer.hashicorp.com/vault/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations\nexample:\nui = true\nlistener "tcp" {\n tls_disable = 1\n address = "[::]:8200"\n cluster_address = "[::]:8201"\n # Enable unauthenticated metrics access (necessary for Prometheus Operator)\n #telemetry {\n # unauthenticated_metrics_access = "true"\n #}\n}\nstorage "file" {\n path = "/vault/data"\n}\n# Example configuration for using auto-unseal, using Google Cloud KMS. The\n# GKMS keys must already exist, and the cluster must have a service account\n# that is authorized to access GCP KMS.\nseal "gcpckms" {\n project = "vault-helm-dev"\n region = "global"\n key_ring = "vault-helm-unseal-kr"\n crypto_key = "vault-helm-unseal-key"\n}\n# Example configuration for enabling Prometheus metrics in your config.\ntelemetry {\n prometheus_retention_time = "30s"\n disable_hostname = true\n} |
| server_ha_enabled | boolean | ❌ | false | |
| server_ha_replicas | number | ❌ | 3 | |
| server_ha_api_addr | string | ❌ | | The api_addr configuration for Vault HA.\nSee https://developer.hashicorp.com/vault/docs/configuration#api_addr\nIf set to null, this will be set to the Pod IP Address. |
| server_ha_cluster_addr | string | ❌ | | The cluster_addr configuration for Vault HA.\nSee https://developer.hashicorp.com/vault/docs/configuration#cluster_addr\nIf set to null, defaults to https://$(HOSTNAME).{{ template "vault.fullname" . }}-internal:8201 |
| server_ha_raft_enabled | boolean | ❌ | false | Run Vault in 'HA Raft' mode. This uses Vault's integrated Raft storage for HA and does not require an external storage backend such as Consul. |
| server_ha_raft_set_node_id | boolean | ❌ | false | Set the node ID for Vault HA Raft mode. This is only used if HA Raft mode is enabled. |
| server_ha_raft_config | multiline | ❌ | | Note: Configuration files are stored in ConfigMaps so sensitive data\nsuch as passwords should be either mounted through extraSecretEnvironmentVars\nor through a Kube secret. For more information see:\nhttps://developer.hashicorp.com/vault/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations\nSupported formats are HCL and JSON.\n\nexample:\nui = true\nlistener "tcp" {\n tls_disable = 1\n address = "[::]:8200"\n cluster_address = "[::]:8201"\n # Enable unauthenticated metrics access (necessary for Prometheus Operator)\n #telemetry {\n # unauthenticated_metrics_access = "true"\n #}\n}\nstorage "raft" {\n path = "/vault/data"\n}\nservice_registration "kubernetes" {} |
| server_ha_config | multiline | ❌ | | Note: Configuration files are stored in ConfigMaps so sensitive data\nsuch as passwords should be either mounted through extraSecretEnvironmentVars\nor through a Kube secret. For more information see:\nhttps://developer.hashicorp.com/vault/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations\nexample:\nui = true\nlistener "tcp" {\n tls_disable = 1\n address = "[::]:8200"\n cluster_address = "[::]:8201"\n}\nstorage "consul" {\n path = "vault"\n address = "HOST_IP:8500"\n}\nservice_registration "kubernetes" {}\n# Example configuration for using auto-unseal, using Google Cloud KMS. The\n# GKMS keys must already exist, and the cluster must have a service account\n# that is authorized to access GCP KMS.\nseal "gcpckms" {\n project = "vault-helm-dev-246514"\n region = "global"\n key_ring = "vault-helm-unseal-kr"\n crypto_key = "vault-helm-unseal-key"\n}\n# Example configuration for enabling Prometheus metrics.\n# If you are using Prometheus Operator you can enable a ServiceMonitor resource below.\n# You may wish to enable unauthenticated metrics in the listener block above.\ntelemetry {\n prometheus_retention_time = "30s"\n disable_hostname = true\n} |
| server_disruption_budget_enabled | boolean | ❌ | true | A disruption budget limits the number of pods of a replicated application\nthat are down simultaneously from voluntary disruptions |
| server_disruption_budget_max_unavailable | string | ❌ | | maxUnavailable will default to (n/2)-1 where n is the number of\nreplicas. If you’d like a custom value, you can specify an override here. |
| server_service_account_create | boolean | ❌ | true | Create service account used to run Vault Server. |
| server_service_account_name | string | ❌ | | The name of the service account to use.\nIf not set and create is true, a name is generated using the fullname template. |
| server_service_account_create_secret | boolean | ❌ | true | Create a Secret API object to store a non-expiring token for the service account.\nPrior to v1.24.0, Kubernetes used to generate this secret for each service account by default.\nKubernetes now recommends using short-lived tokens from the TokenRequest API or projected volumes instead if possible.\nFor more details, see https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets\nserviceAccount.create must be equal to ‘true’ in order to use this feature. |
| server_service_account_annotations | multiline | ❌ | | Extra annotations for the serviceAccount definition. This can either be\nYAML or a YAML-formatted multi-line templated string map of the\nannotations to apply to the serviceAccount.\nexample:\nannotations: {} |
| server_service_account_labels | multiline | ❌ | | Extra labels for the serviceAccount definition. This can either be\njson or yaml map of the labels to apply to the serviceAccount.\nexample:\nlabels:\n app.kubernetes.io/name: name\n app.kubernetes.io/instance: instance-name\n component: server |
| server_service_discovery_enabled | boolean | ❌ | true | Enable or disable a service account role binding with the permissions required for\nVault’s Kubernetes service_registration config option.\nSee https://developer.hashicorp.com/vault/docs/configuration/service-registration/kubernetes |
| server_host_network | boolean | ❌ | false | Whether to use the host network for the Vault server pods. |
| ui_publish_not_ready_addresses | boolean | ❌ | true | Publish not-ready addresses for the UI service, so the UI is reachable before Vault is unsealed. |
| ui_active_vault_pod_only | boolean | ❌ | false | The UI service should only contain selectors for the active Vault pod. |
| ui_service_type | string | ❌ | ClusterIP | UI service type |
| ui_external_port | number | ❌ | | Vault UI port. |
| ui_target_port | number | ❌ | | Target port to map to. |
| ui_service_ip_family_policy | string | ❌ | | The IP family policy and IP families options set the behaviour in a dual-stack environment.\nConfigures the service’s supported IP family policy, which can be either:\nSingleStack: Single-stack service. The control plane allocates a cluster IP for the Service, using the first configured service cluster IP range.\nPreferDualStack: Allocates IPv4 and IPv6 cluster IPs for the Service.\nRequireDualStack: Allocates Service .spec.ClusterIPs from both IPv4 and IPv6 address ranges. |
| server_service_ip_families | multiline | ❌ | | Sets the families that should be supported and the order in which they should be applied to the ClusterIP as well.\nCan be IPv4 and/or IPv6. |
| ui_service_ip_families | string | ❌ | | Sets the families that should be supported and the order in which they should be applied to the ClusterIP as well. Can be IPv4 and/or IPv6. |
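To illustrate the third configuration style from the overview (passing the whole configuration as raw YAML), the HA Raft parameters above correspond to fields under `server.ha` in the public vault-helm chart's `values.yaml`. The following is a hedged sketch based on that public schema; the exact parameter-to-values mapping used by the Kosmos AppTemplate is assumed, not confirmed:

```yaml
# Sketch of a raw-YAML configuration enabling HA Raft mode, following the
# vault-helm chart's values.yaml layout. Field names come from the public
# chart; how the AppTemplate wraps them is an assumption.
server:
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true       # Raft integrated storage (cf. server_ha_raft_set_node_id above)
      setNodeId: true     # maps to server_ha_raft_set_node_id
      config: |
        ui = true
        listener "tcp" {
          tls_disable     = 1
          address         = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        storage "raft" {
          path = "/vault/data"
        }
        service_registration "kubernetes" {}
```

Note that the `config` string is stored in a ConfigMap, so (as the parameter descriptions above warn) secrets should be injected via `extraSecretEnvironmentVars` or a Kubernetes Secret rather than embedded here.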
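The `server_disruption_budget_max_unavailable` default of (n/2)-1 uses integer division, which keeps a Raft quorum alive during voluntary disruptions. A quick sketch of the arithmetic (the function name is ours, for illustration only):

```shell
# Compute the documented default for maxUnavailable: (n/2)-1 with
# integer division, where n is the number of HA replicas.
default_max_unavailable() {
  local replicas=$1
  echo $(( replicas / 2 - 1 ))
}

default_max_unavailable 3   # → 0: a 3-node cluster allows no voluntary evictions by default
default_max_unavailable 5   # → 1: a 5-node cluster can lose 1 pod and keep quorum
```

So for small clusters the default is conservative; override it only if you accept the availability trade-off.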