Create MKS Cluster using Terraform
Introduction
This guide demonstrates how to create an MKS (Managed Kubernetes Service) cluster on SPC using the Kosmos Terraform Provider. The kosmos_mkscluster resource provisions MKS infrastructure through Kosmos, which then manages the cluster lifecycle.
Key Benefits:
- Infrastructure as Code for MKS clusters
- Kosmos manages the cluster lifecycle after provisioning
- Consistent management through Kosmos console and CLI
- Integration with existing Terraform workflows
Requirements
| Name | Version |
|---|---|
| Terraform CLI | >= 1.0 |
| Kosmos CLI | >= 4.3.9 |
| AWS CLI | >= 2.27.58 |
| Kosmos Terraform Provider | >= 0.11 |
| AWS Terraform Provider | >= 5.0 |
| TLS Terraform Provider | >= 4.0 |
| Time Terraform Provider | >= 0.9 |
Artifacts
Download the Terraform module and provider from the Terraform Artifacts page:
| Artifact | Version |
|---|---|
| Kosmos Terraform Provider | v0.12.0 |
| MKS (Samsung Private Cloud) Module | v3.3.0 |
Getting Started
Prerequisites
1. Install the Kosmos provider on your local machine by following the getting started with Terraform provider guide.
2. Ensure you have valid SPC credentials configured in your AWS SDK profile.
   - For initial credential setup, see Setting up client interfaces in the SPC documentation.
   - For IAM role assignment and trust relationships, see the SCOP guide.

   provider "aws" {
     profile = "YOUR CREDENTIALS PROFILE AT ~/.aws/credentials"
     ...
   }

3. Ensure you have a valid Kosmos access key:
   - Open the Kosmos web console
   - Click your profile icon in the top-right corner, then click Access Keys
   - Click Create access key, then Save
4. Ensure you have a fleet in Kosmos. If you don't have one, contact the Kosmos Administrator.
5. Ensure you have an OIDC provider entry in your SPC account pointing to the correct Kosmos domain (see OIDC Provider Setup below).
6. Ensure you have the required IAM roles configured in SPC (see IAM Roles below).
7. Ensure you have an existing VPC and subnets for the MKS cluster. Kosmos requires pre-existing network infrastructure.
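Because Kosmos requires pre-existing network infrastructure, you can reference the existing VPC and subnets with data sources instead of hard-coding IDs. A minimal sketch, assuming the VPC carries a `Name = "mks-vpc"` tag (the tag values are illustrative; adjust them to your own tagging scheme):

```hcl
# Look up the existing VPC by its Name tag (tag value is an assumption)
data "aws_vpc" "mks" {
  tags = {
    Name = "mks-vpc"
  }
}

# Collect all subnet IDs belonging to that VPC
data "aws_subnets" "mks" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.mks.id]
  }
}
```

The resulting IDs can then be passed to the cluster as `subnet_ids = data.aws_subnets.mks.ids`.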
Provider Configuration
Configure AWS, Kosmos, and TLS providers:
# providers.tf
terraform {
required_version = ">= 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.95"
}
kosmos = {
source = "local/samsung/kosmos"
version = ">= 0.11.0"
}
tls = {
source = "hashicorp/tls"
version = ">= 4.0"
}
}
}
provider "aws" {
profile = "scop"
region = var.spc_region
# SPC-specific endpoints
endpoints {
iam = "https://iam.samsungspc.com"
sts = "https://sts.samsungspc.com"
ec2 = "https://ec2.${var.spc_region}.samsungspc.com"
s3 = "https://s3.${var.spc_region}.samsungspc.com"
kms = "https://kms.${var.spc_region}.samsungspc.com"
}
}
provider "kosmos" {
accesskey = var.kosmos_access_key
endpoint = "https://console.kosmos.spcplatform.com"
}
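To keep the Kosmos access key out of version control, declare it as a sensitive variable and supply it through the environment rather than in `terraform.tfvars`. Terraform automatically maps environment variables prefixed with `TF_VAR_` onto input variables:

```hcl
variable "kosmos_access_key" {
  description = "Kosmos API access key"
  type        = string
  sensitive   = true  # redacts the value from plan/apply output
}
```

Before running Terraform, export the value, e.g. `export TF_VAR_kosmos_access_key="..."`.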
Kosmos OIDC Provider
Create the OIDC provider using Terraform. The tls_certificate data source dynamically fetches the thumbprint:
data "tls_certificate" "kosmos" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
}
resource "aws_iam_openid_connect_provider" "kosmos" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
client_id_list = ["kosmos-operator"]
thumbprint_list = [data.tls_certificate.kosmos.certificates[0].sha1_fingerprint]
tags = {
Name = "kosmos-oidc-provider"
}
}
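If other configurations (for example, the identity module in a separate root module) need the provider ARN, you can surface it as an output. A small convenience sketch:

```hcl
output "kosmos_oidc_provider_arn" {
  description = "ARN of the Kosmos OIDC provider, for consumption by other configurations"
  value       = aws_iam_openid_connect_provider.kosmos.arn
}
```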
IAM Roles for Kosmos
The Kosmos provider requires IAM roles to manage MKS clusters. Use the Kosmos identity module to set up the required roles.
Note: SPC does not support inline policies (aws_iam_role_policy). You must use managed policies (aws_iam_policy + aws_iam_role_policy_attachment). Terraform will fail with InvalidAction: action is invalid if you attempt to use inline policies.
Kosmos Identity Module
Note: Replace X.Y.Z with the latest module version. Check the Kosmos Terraform modules for current releases.
The Kosmos identity module handles all IAM setup with proper permissions:
# First, create the OIDC provider
data "tls_certificate" "kosmos" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
}
resource "aws_iam_openid_connect_provider" "kosmos" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
client_id_list = ["kosmos-operator"]
thumbprint_list = [data.tls_certificate.kosmos.certificates[0].sha1_fingerprint]
tags = {
Name = "kosmos-oidc-provider"
}
}
# Use the identity module
module "kosmos_oidc" {
source = "https://s3.ap-southeast-1.amazonaws.com/srin-s3-terraform-modules/kosmos-spc-identity-vX.Y.Z.tar.gz"
oidc_provider_arn = aws_iam_openid_connect_provider.kosmos.arn
fleet_name = var.fleet_name
kosmos_tier = null # Use null for production, "dev" or "stg" for other environments
}
The module creates:
- kosmos-operator-{fleet_name} role with comprehensive EKS, EC2, IAM, and KMS permissions
- KosmosServiceRolePolicy-{fleet_name} managed policy
- mks-service-role-{fleet_name} for EKS cluster operations
Then reference the module output in your cluster configuration:
resource "kosmos_mkscluster" "cluster" {
# ...
spec = {
mks_config = {
kosmos_role_arn = module.kosmos_oidc.kosmos_operator_role_arn
# ...
}
}
}
EKS Cluster Role
This is the service role assumed by the MKS/EKS control plane:
resource "aws_iam_role" "eks_cluster_role" {
name = "eksClusterRole-${var.cluster_name}"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = {
Service = "eks.amazonaws.com"
}
Action = "sts:AssumeRole"
}]
})
}
resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_role.name
}
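As an alternative to `jsonencode`, the same trust policy can be expressed with the `aws_iam_policy_document` data source, which validates the policy structure at plan time. An equivalent sketch for the cluster role:

```hcl
data "aws_iam_policy_document" "eks_assume" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "eks_cluster_role" {
  name               = "eksClusterRole-${var.cluster_name}"
  assume_role_policy = data.aws_iam_policy_document.eks_assume.json
}
```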
EKS Node Role
This role is assumed by the worker nodes:
resource "aws_iam_role" "eks_node_role" {
name = "eksNodeRole-${var.cluster_name}"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
Action = "sts:AssumeRole"
}]
})
}
resource "aws_iam_role_policy_attachment" "node_worker" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_node_role.name
}
resource "aws_iam_role_policy_attachment" "node_cni" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_node_role.name
}
resource "aws_iam_role_policy_attachment" "node_ecr" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_node_role.name
}
How to Run
1. Create a terraform.tfvars file with your configuration:

   kosmos_access_key = "your-kosmos-access-key"
   spc_region        = "ap-southeast-1"
   fleet_name        = "your-fleet-name"
   cluster_name      = "my-mks-cluster"
   kosmos_owner      = "your-kosmos-username"
   subnet_ids        = ["subnet-xxx", "subnet-yyy"]

2. Initialize Terraform:

   terraform init

3. Review the planned changes:

   terraform plan

4. Apply the configuration:

   terraform apply

5. After apply completes, connect the cluster to Kosmos (see Connecting the Cluster to Kosmos):

   aws eks update-kubeconfig --name <CLUSTER_NAME> --region <SPC_REGION> \
     --endpoint-url https://eks.<SPC_REGION>.samsungspc.com
   kosmos join cluster <CLUSTER_NAME> --fleet <FLEET_NAME> --wait

6. To destroy resources:

   terraform destroy
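For shared or CI-driven runs, you may prefer remote state over the default local state file. A hedged sketch using an S3 backend pointed at the SPC S3 endpoint (the bucket name is hypothetical, the endpoint layout follows the provider configuration above, and backend argument names vary slightly across Terraform versions; confirm against your Terraform release):

```hcl
terraform {
  backend "s3" {
    bucket   = "my-terraform-state"      # assumed bucket name
    key      = "mks/terraform.tfstate"
    region   = "ap-southeast-1"
    endpoint = "https://s3.ap-southeast-1.samsungspc.com"  # SPC S3 endpoint

    # Often needed for non-AWS S3-compatible endpoints
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }
}
```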
Quick Start Example
This complete example creates an MKS cluster using the Kosmos provider with the identity module:
- Ensure you have valid SPC credentials.
- Ensure you have the aws CLI installed; you can follow the instructions here to install it.
- Ensure you have the helm CLI installed; you can follow the instructions here to install it. This is required by the kosmos CLI to connect the cluster to Kosmos.
- Ensure you have the kosmos CLI installed; you can follow the instructions here to install it.
# main.tf - Complete MKS cluster using Kosmos provider and identity module
terraform {
required_version = ">= 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.0.0"
}
kosmos = {
source = "local/samsung/kosmos"
version = ">= 0.11.0"
}
tls = {
source = "hashicorp/tls"
version = ">= 4.0"
}
time = {
source = "hashicorp/time"
version = ">= 0.9"
}
}
}
# ============================================
# VARIABLES
# ============================================
variable "kosmos_access_key" {
description = "Kosmos API access key"
type = string
sensitive = true
}
variable "spc_region" {
description = "SPC region for the cluster"
type = string
default = "ap-southeast-1"
}
variable "fleet_name" {
description = "Kosmos fleet name"
type = string
}
variable "cluster_name" {
description = "MKS cluster name"
type = string
}
variable "kosmos_owner" {
description = "Kosmos username for cluster ownership"
type = string
}
variable "kubernetes_version" {
description = "Kubernetes version"
type = string
default = "1.31"
}
variable "subnet_ids" {
description = "List of subnet IDs for the cluster"
type = list(string)
}
variable "public_access_cidrs" {
description = "CIDR blocks allowed to access the cluster"
type = list(string)
default = ["0.0.0.0/0"]
}
# ============================================
# PROVIDERS
# ============================================
provider "aws" {
profile = "scop"
region = var.spc_region
# SPC-specific endpoints
endpoints {
iam = "https://iam.samsungspc.com"
sts = "https://sts.samsungspc.com"
ec2 = "https://ec2.${var.spc_region}.samsungspc.com"
s3 = "https://s3.${var.spc_region}.samsungspc.com"
kms = "https://kms.${var.spc_region}.samsungspc.com"
}
}
provider "kosmos" {
accesskey = var.kosmos_access_key
endpoint = "https://console.kosmos.spcplatform.com"
}
# ============================================
# DATA SOURCES
# ============================================
data "aws_caller_identity" "current" {}
# ============================================
# OIDC PROVIDER
# ============================================
data "tls_certificate" "kosmos" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
}
resource "aws_iam_openid_connect_provider" "kosmos" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
client_id_list = ["kosmos-operator"]
thumbprint_list = [data.tls_certificate.kosmos.certificates[0].sha1_fingerprint]
tags = {
Name = "kosmos-oidc-provider"
}
}
# ============================================
# KOSMOS IDENTITY MODULE
# ============================================
module "kosmos_oidc" {
source = "https://s3.ap-southeast-1.amazonaws.com/srin-s3-terraform-modules/kosmos-spc-identity-vX.Y.Z.tar.gz"
oidc_provider_arn = aws_iam_openid_connect_provider.kosmos.arn
fleet_name = var.fleet_name
kosmos_tier = null # Use null for production, "dev" or "stg" for other environments
}
# ============================================
# EKS IAM ROLES
# ============================================
# EKS Cluster Role
resource "aws_iam_role" "eks_cluster_role" {
name = "eksClusterRole-${var.cluster_name}"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = { Service = "eks.amazonaws.com" }
Action = "sts:AssumeRole"
}]
})
}
resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_role.name
}
# EKS Node Role
resource "aws_iam_role" "eks_node_role" {
name = "eksNodeRole-${var.cluster_name}"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = { Service = "ec2.amazonaws.com" }
Action = "sts:AssumeRole"
}]
})
}
resource "aws_iam_role_policy_attachment" "node_worker" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_node_role.name
}
resource "aws_iam_role_policy_attachment" "node_cni" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_node_role.name
}
resource "aws_iam_role_policy_attachment" "node_ecr" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_node_role.name
}
# ============================================
# DELAY FOR IAM PROPAGATION
# ============================================
resource "time_sleep" "wait_for_iam" {
depends_on = [
module.kosmos_oidc,
aws_iam_role_policy_attachment.eks_cluster_policy,
aws_iam_role_policy_attachment.node_worker,
aws_iam_role_policy_attachment.node_cni,
aws_iam_role_policy_attachment.node_ecr
]
create_duration = "10s"
}
# ============================================
# MKS CLUSTER (KOSMOS PROVIDER)
# ============================================
resource "kosmos_mkscluster" "cluster" {
depends_on = [time_sleep.wait_for_iam]
name = var.cluster_name
namespace = var.fleet_name
spec = {
name = var.cluster_name
authorization = {
owner = {
user = var.kosmos_owner
}
}
mks_config = {
kosmos_jwtclaims = {
iss = ""
}
display_name = var.cluster_name
imported = false
kosmos_role_arn = module.kosmos_oidc.kosmos_operator_role_arn
region = var.spc_region
cluster_role = aws_iam_role.eks_cluster_role.name
kubernetes_version = var.kubernetes_version
public_access = true
private_access = true
public_access_sources = var.public_access_cidrs
subnets = var.subnet_ids
logging_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
secrets_encryption = false
ebsCSIDriver = true
tags = {
name = var.cluster_name
owner = var.kosmos_owner
}
node_groups = [
{
nodegroup_name = "default"
node_role = aws_iam_role.eks_node_role.name
version = var.kubernetes_version
instance_type = "m5.large"
disk_size = 30
min_size = 1
max_size = 3
desired_size = 1
subnets = var.subnet_ids
gpu = false
request_spot_instances = false
resource_tags = {
name = "default-resource-tag"
}
labels = {
name = "default"
}
tags = {
owner = var.kosmos_owner
}
}
]
}
}
}
# ============================================
# OUTPUTS
# ============================================
output "cluster_name" {
description = "MKS cluster name"
value = kosmos_mkscluster.cluster.name
}
output "fleet_name" {
description = "Kosmos fleet name"
value = var.fleet_name
}
output "kosmos_role_arn" {
description = "Kosmos operator role ARN"
value = module.kosmos_oidc.kosmos_operator_role_arn
}
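You can also emit the kubeconfig command from the How to Run section as an output, so the exact command to run next is printed after apply. A convenience sketch:

```hcl
output "kubeconfig_command" {
  description = "Command to configure kubectl access to the new cluster"
  value       = "aws eks update-kubeconfig --name ${var.cluster_name} --region ${var.spc_region} --endpoint-url https://eks.${var.spc_region}.samsungspc.com"
}
```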
Create a terraform.tfvars file:
kosmos_access_key = "your-kosmos-access-key"
spc_region = "ap-southeast-1"
fleet_name = "your-fleet-name"
cluster_name = "my-mks-cluster"
kosmos_owner = "your-kosmos-username"
kubernetes_version = "1.30"
subnet_ids = ["subnet-xxx", "subnet-yyy"]
public_access_cidrs = ["your-ip/32"]
Then run:
terraform init
terraform plan
terraform apply
Variables
| Variable | Req. | Description | Type | Example |
|---|---|---|---|---|
| `kosmos_access_key` | ✓ | Kosmos API access key (provider argument) | string | - |
| `fleet_name` | ✓ | Kosmos fleet where the cluster will be created | string | `"production-fleet"` |
| `cluster_name` | ✓ | Name of the MKS cluster | string | `"my-mks-cluster"` |
| `kosmos_owner` | ✓ | Kosmos username for cluster ownership | string | `"admin-user"` |
| `spc_region` | ✓ | SPC region for the cluster | string | `"ap-southeast-1"` |
| `kubernetes_version` | ✓ | Kubernetes version | string | `"1.30"` |
| `subnet_ids` | ✓ | List of subnet IDs for the cluster | list(string) | `["subnet-xxx", "subnet-yyy"]` |
| `public_access_cidrs` | ✗ | CIDR blocks allowed to access the cluster | list(string) | `["0.0.0.0/0"]` |
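Input mistakes surface earlier if you attach validation blocks to the required variables. For example, requiring at least two subnets mirrors the usual EKS expectation of subnets spanning at least two availability zones:

```hcl
variable "subnet_ids" {
  description = "List of subnet IDs for the cluster"
  type        = list(string)

  validation {
    condition     = length(var.subnet_ids) >= 2
    error_message = "Provide at least two subnet IDs (in different availability zones)."
  }
}
```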
Usage Examples
Basic Usage
resource "kosmos_mkscluster" "cluster" {
name = "my-cluster"
namespace = "my-fleet"
spec = {
name = "my-cluster"
authorization = {
owner = {
user = "your-kosmos-username"
}
}
mks_config = {
kosmos_jwtclaims = {
iss = ""
}
display_name = "my-cluster"
imported = false
kosmos_role_arn = module.kosmos_oidc.kosmos_operator_role_arn
region = "ap-southeast-1"
cluster_role = aws_iam_role.eks_cluster_role.name
kubernetes_version = "1.31"
public_access = true
private_access = true
public_access_sources = ["0.0.0.0/0"]
subnets = ["subnet-xxx", "subnet-yyy"]
ebsCSIDriver = true
tags = {
name = "my-cluster"
owner = "your-kosmos-username"
}
node_groups = [{
nodegroup_name = "default"
node_role = aws_iam_role.eks_node_role.name
version = "1.31"
instance_type = "m5.large"
disk_size = 30
min_size = 1
max_size = 3
desired_size = 1
gpu = false
request_spot_instances = false
}]
}
}
}
With Secrets Encryption
resource "aws_kms_key" "eks" {
description = "KMS key for MKS secrets encryption"
deletion_window_in_days = 7
enable_key_rotation = true
}
resource "kosmos_mkscluster" "cluster" {
name = "secure-cluster"
namespace = "my-fleet"
spec = {
name = "secure-cluster"
authorization = {
owner = {
user = "your-kosmos-username"
}
}
mks_config = {
kosmos_jwtclaims = {
iss = ""
}
display_name = "secure-cluster"
imported = false
kosmos_role_arn = module.kosmos_oidc.kosmos_operator_role_arn
region = "ap-southeast-1"
cluster_role = aws_iam_role.eks_cluster_role.name
kubernetes_version = "1.31"
public_access = true
private_access = true
subnets = var.subnet_ids
ebsCSIDriver = true
# Enable secrets encryption
secrets_encryption = true
kms_key = aws_kms_key.eks.arn
tags = {
name = "secure-cluster"
owner = "your-kosmos-username"
}
node_groups = [{
nodegroup_name = "default"
node_role = aws_iam_role.eks_node_role.name
version = "1.31"
instance_type = "m5.large"
disk_size = 30
min_size = 1
max_size = 3
desired_size = 1
gpu = false
request_spot_instances = false
}]
}
}
}
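Depending on how your SPC account scopes KMS access, the cluster role may also need permission to use the key. A hedged sketch of a key policy granting it, alongside the usual account-root statement (whether this is required depends on your account's KMS defaults):

```hcl
resource "aws_kms_key" "eks" {
  description             = "KMS key for MKS secrets encryption"
  deletion_window_in_days = 7
  enable_key_rotation     = true

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        # Keep full control with the account root (standard default statement)
        Sid       = "EnableRootAccess"
        Effect    = "Allow"
        Principal = { AWS = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:root" }
        Action    = "kms:*"
        Resource  = "*"
      },
      {
        # Allow the cluster role to encrypt/decrypt Kubernetes secrets
        Sid       = "AllowClusterRole"
        Effect    = "Allow"
        Principal = { AWS = aws_iam_role.eks_cluster_role.arn }
        Action    = ["kms:Encrypt", "kms:Decrypt", "kms:DescribeKey"]
        Resource  = "*"
      }
    ]
  })
}
```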
Multiple Node Groups
resource "kosmos_mkscluster" "cluster" {
name = "multi-node-cluster"
namespace = "my-fleet"
spec = {
name = "multi-node-cluster"
authorization = {
owner = {
user = "your-kosmos-username"
}
}
mks_config = {
kosmos_jwtclaims = {
iss = ""
}
display_name = "multi-node-cluster"
imported = false
kosmos_role_arn = module.kosmos_oidc.kosmos_operator_role_arn
region = "ap-southeast-1"
cluster_role = aws_iam_role.eks_cluster_role.name
kubernetes_version = "1.31"
public_access = true
private_access = true
subnets = var.subnet_ids
ebsCSIDriver = true
logging_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
tags = {
name = "multi-node-cluster"
owner = "your-kosmos-username"
}
node_groups = [
{
nodegroup_name = "general"
node_role = aws_iam_role.eks_node_role.name
version = "1.31"
instance_type = "m5.large"
disk_size = 30
min_size = 2
max_size = 5
desired_size = 3
gpu = false
request_spot_instances = false
labels = {
workload = "general"
}
},
{
nodegroup_name = "compute"
node_role = aws_iam_role.eks_node_role.name
version = "1.31"
instance_type = "m5.large"
disk_size = 50
min_size = 1
max_size = 10
desired_size = 2
gpu = false
request_spot_instances = false
labels = {
workload = "compute-intensive"
}
}
]
}
}
}
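The node group schema also exposes `request_spot_instances` and `spot_instance_types`, so a cost-optimized node group can be sketched as follows (the fallback instance types are illustrative, and spot capacity availability depends on your SPC region):

```hcl
node_groups = [
  {
    nodegroup_name         = "spot-workers"
    node_role              = aws_iam_role.eks_node_role.name
    version                = "1.31"
    instance_type          = "m5.large"
    disk_size              = 30
    min_size               = 1
    max_size               = 5
    desired_size           = 2
    gpu                    = false
    request_spot_instances = true
    spot_instance_types    = ["m5.large", "m5a.large"]  # fallback types are assumptions
    labels = {
      capacity = "spot"
    }
  }
]
```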
Connecting the Cluster to Kosmos
After the MKS cluster is created via Terraform, you must install the Loft agent to complete the connection to Kosmos. Without this step, the cluster will remain in the connecting state.
Note: The kosmos_mkscluster resource provisions the EKS infrastructure but does not automatically install the Loft agent. You must run kosmos join cluster after terraform apply completes.
Step 1: Configure kubectl Access
First, update your kubeconfig to access the newly created cluster:
# Using AWS CLI with SPC endpoints
aws eks update-kubeconfig --name <CLUSTER_NAME> --region <SPC_REGION> \
--endpoint-url https://eks.<SPC_REGION>.samsungspc.com
Step 2: Install the Loft Agent
Run the join command to install the Loft agent and connect the cluster to Kosmos:
# Login to Kosmos
kosmos login https://console.kosmos.spcplatform.com/ --access-key <ACCESS_KEY>
# Join the cluster to Kosmos (installs Loft agent)
kosmos join cluster <CLUSTER_NAME> --fleet <FLEET_NAME> --wait
The --wait flag ensures the command waits until the cluster is fully initialized.
Step 3: Verify Connection
Check that the cluster status is ready:
# List clusters in your fleet
kosmos list mks --fleet <FLEET_NAME>
# Set kubeconfig to use the cluster via Kosmos
kosmos use cluster <CLUSTER_NAME> --fleet <FLEET_NAME>
# Verify access
kubectl get nodes
Schema
Optional
- `name` (String) Name of the MKSCluster
- `namespace` (String) Object name and auth scope, such as for teams and projects (fleet name)
- `spec` (Attributes) MKSClusterSpec defines the desired state of MKSCluster (see below for nested schema)
Nested Schema for spec
Required:
- `mks_config` (Attributes) Required. Configuration for MKS operator. (see below for nested schema)
Optional:
- `authorization` (Attributes) Optional. Configuration related to the cluster RBAC settings.
- `description` (String) Optional. A human-readable description of this cluster.
- `display_name` (String) Optional. If specified, this name is displayed in the UI instead of the metadata name.
- `logging_config` (Attributes) Optional. Logging configuration for this cluster.
- `monitoring_config` (Attributes) Optional. Monitoring configuration for this cluster.
Nested Schema for spec.mks_config
Required:
- `display_name` (String) Required. DisplayName is the name of the MKS cluster.
- `imported` (Boolean) Required. Set to `false` to create a new cluster, `true` to import an existing cluster.
- `kosmos_role_arn` (String) Required. ARN of the IAM role used by Kosmos to manage the cluster.
- `region` (String) Required. SPC region where the cluster is located.
Required when imported = false:
- `cluster_role` (String) IAM role assumed by the MKS cluster control plane.
- `kubernetes_version` (String) Kubernetes version for the cluster.
- `public_access` (Boolean) Whether public access is enabled.
- `private_access` (Boolean) Whether private access is enabled.
- `subnets` (List of String) Subnet IDs for the cluster.
Optional:
- `delete_on_detachment` (Boolean) Whether to delete the cluster when detached from Kosmos.
- `ebs_csidriver` (Boolean) Whether to enable the EBS CSI driver.
- `kms_key` (String) KMS key ARN for secrets encryption.
- `logging_types` (List of String) Logging types enabled on the cluster.
- `node_groups` (Attributes List) Node groups for the cluster.
- `public_access_sources` (List of String) CIDRs allowed to access via the public endpoint.
- `secrets_encryption` (Boolean) Whether secrets encryption is enabled.
- `security_groups` (List of String) Security group IDs for the cluster.
- `tags` (Map of String) Tags applied to the cluster.
Nested Schema for spec.mks_config.node_groups
Required:
- `nodegroup_name` (String) Name of the node group.
- `node_role` (String) IAM role assumed by the nodes.
- `version` (String) Kubernetes version for the nodes.
- `min_size` (Number) Minimum number of nodes.
- `max_size` (Number) Maximum number of nodes.
- `desired_size` (Number) Desired number of nodes.
Optional:
- `disk_size` (Number) Disk size in GB attached to nodes.
- `instance_type` (String) Instance type for nodes.
- `gpu` (Boolean) Whether the node group has GPU instances.
- `image_id` (String) AMI ID for the node group.
- `labels` (Map of String) Labels applied to nodes.
- `subnets` (List of String) Subnet IDs for the node group.
- `tags` (Map of String) Tags applied to nodes.
- `request_spot_instances` (Boolean) Whether to use spot instances.
- `spot_instance_types` (List of String) Instance types for spot instances.