Create EKS Cluster using Terraform
Introduction
This is a Terraform reference script for creating an EKS cluster using the Kosmos provider that conforms to the Samsung Security Checklist.
Requirements
| Name | Version |
|---|---|
| Terraform CLI | >= 1.13 |
| Kosmos CLI | >= 4.3.9 |
| AWS CLI | >= 2.27.58 |
| Kosmos Terraform Provider | >= 0.8 |
| AWS Terraform Provider | >= 5.95 |
Artifacts
Download the Terraform module and provider from the Terraform Artifacts page:
| Artifact | Version |
|---|---|
| Kosmos Terraform Provider | v0.12.0 |
| EKS (Amazon Web Services) Module | v3.2.2 |
Getting started
Prerequisites
Install the Kosmos provider on your local machine by following the getting started with terraform provider guide.
Ensure you have valid AWS credentials configured in your AWS SDK profile.
- For AWS CLI credential setup, see Configuring the AWS CLI in the AWS documentation.
provider "aws" {
  profile = "YOUR CREDENTIALS PROFILE AT ~/.aws/credentials"
  ...
}
Ensure the AWS credentials have at least the minimum permissions required (see Required permissions below).
Ensure you have a valid Kosmos access key:
- Open the Kosmos web console
- Click your profile icon in the top-right corner, then click Access Keys
- Click Create access key, then Save (you can also export this key as an environment variable; see the sketch after this list)
Ensure you have a fleet in Kosmos. If you don’t have one, contact the Kosmos Administrator.
Ensure you have an OIDC provider entry in your AWS account pointing to the correct Kosmos domain (see Kosmos OIDC Provider below).
Ensure you have the AmazonEKSNodeRole IAM role configured (see AmazonEKSNodeRole IAM Role below).
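As noted in the access-key step above, one way to avoid writing the key into terraform.tfvars is Terraform's standard TF_VAR_ environment-variable mechanism. A minimal sketch, assuming the kosmos_access_key variable name used later in this guide:
# Terraform automatically reads TF_VAR_<name> into variable "<name>"
export TF_VAR_kosmos_access_key="<YOUR ACCESS KEY>"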
Provider Configuration
Create a providers.tf file with both AWS and Kosmos provider configuration:
terraform {
required_version = ">= 1.3"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.95"
}
kosmos = {
source = "local/samsung/kosmos"
version = ">= 0.11.0"
}
tls = {
source = "hashicorp/tls"
version = ">= 4.0"
}
}
}
provider "aws" {
region = var.aws_region
}
provider "kosmos" {
accesskey = var.kosmos_access_key
endpoint = "https://console.kosmos.spcplatform.com"
}
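The provider blocks above reference var.aws_region and var.kosmos_access_key. If your working directory does not already declare them (the reference script's variables.tf may), a minimal sketch:
variable "aws_region" {
  description = "AWS region target"
  type        = string
  default     = "ap-northeast-2"
}

variable "kosmos_access_key" {
  description = "Kosmos API access key"
  type        = string
  sensitive   = true
}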
AmazonEKSNodeRole IAM Role
The Kosmos EKS module requires an IAM role named AmazonEKSNodeRole to exist in your AWS account. The Kosmos webhook validates this role exists before creating a cluster.
Create this role before running terraform apply:
# AmazonEKSNodeRole - Required by Kosmos webhook for EKS node groups
resource "aws_iam_role" "amazon_eks_node_role" {
name = "AmazonEKSNodeRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = {
Service = "ec2.amazonaws.com"
}
Action = "sts:AssumeRole"
}]
})
tags = {
Name = "AmazonEKSNodeRole"
}
}
resource "aws_iam_role_policy_attachment" "node_worker" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.amazon_eks_node_role.name
}
resource "aws_iam_role_policy_attachment" "node_cni" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.amazon_eks_node_role.name
}
resource "aws_iam_role_policy_attachment" "node_ecr" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.amazon_eks_node_role.name
}
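Since the Kosmos webhook only checks for this role by name, it can be worth verifying the role and its attached policies before running terraform apply. A quick check with the AWS CLI:
# Confirm the role exists and list its attached policies
aws iam get-role --role-name AmazonEKSNodeRole --query 'Role.Arn' --output text
aws iam list-attached-role-policies --role-name AmazonEKSNodeRole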
Kosmos OIDC Provider (Terraform-Native)
Create the OIDC provider using Terraform. The tls_certificate data source dynamically fetches the thumbprint:
# Get OIDC thumbprint using TF-native approach (no openssl required)
data "tls_certificate" "kosmos_oidc" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
}
resource "aws_iam_openid_connect_provider" "kosmos" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = [data.tls_certificate.kosmos_oidc.certificates[0].sha1_fingerprint]
tags = {
Name = "kosmos-oidc-provider"
}
}
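The provider's ARN is the value the module expects in oidc_provider_arn (see the Variables table below). Exposing it as an output makes it easy to wire in or copy out; a sketch:
output "kosmos_oidc_provider_arn" {
  description = "ARN to pass as oidc_provider_arn to the EKS module"
  value       = aws_iam_openid_connect_provider.kosmos.arn
}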
How to run
1. Download this reference script.
2. Create a new file called terraform.tfvars inside the working directory. You can refer to terraform.tfvars.example for the minimum variables that need to be provided.
Important: If you are using Kosmos dev or stg, you need to set the kosmos_tier variable to dev or stg inside terraform.tfvars.
3. Initialize the working directory and download the Terraform providers and modules:
terraform init
4. Apply the script by running one of the commands below, read thoroughly through the resources to be created, then type yes when prompted:
terraform apply
or
terraform apply --var-file=terraform.tfvars
Important: when you don’t define the --var-file argument, Terraform automatically looks for the terraform.tfvars file; if the file doesn’t exist, you will be prompted to input the required variables manually.
5. To destroy all the resources, run one of the commands below:
terraform destroy
or
terraform destroy --var-file=terraform.tfvars
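If you don't have terraform.tfvars.example at hand, a minimal terraform.tfvars covering the required variables from the Variables table below might look like this (all values are placeholders taken from the table's examples):
fleet_name          = "fleet1"
kosmos_owner        = "kosmos_owner"
cluster_name        = "eks-cluster-name"
aws_region          = "ap-southeast-1"
eks_version         = "1.30"
oidc_provider_arn   = "arn:aws:iam::123456789:oidc-provider/console.kosmos.spcplatform.com/oidc-name"
public_access_cidrs = ["192.168.41.11/32"]
kosmos_access_key   = "<YOUR ACCESS KEY>"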
Connecting EKS cluster to Kosmos
Prerequisites
- Ensure you have valid AWS credentials.
- Ensure you have the aws CLI installed; you can follow the instructions here to install.
- Ensure you have the helm CLI installed; you can follow the instructions here to install. This is required by the kosmos CLI to connect the cluster to Kosmos.
- Ensure you have the kosmos CLI installed; you can follow the instructions here to install.
How to connect cluster to Kosmos
1. Add your role to the cluster access entries:
aws eks create-access-entry --cluster-name <CLUSTER NAME> --principal-arn <YOUR ROLE> --type STANDARD --region <AWS REGION>
aws eks associate-access-policy --cluster-name <CLUSTER NAME> --principal-arn <YOUR ROLE> --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy --access-scope type=cluster --region <AWS REGION>
You can get your principal-arn in a few steps (see also the one-line sketch after these steps). First, run this command using the desired profile:
aws sts get-caller-identity --region <REGION>
From its output you get your role name:
{
  "UserId": "XXXXXXXX:session-name",
  "Account": "123456789012",
  "Arn": "arn:aws:sts::123456789012:assumed-role/<YourRoleName>/session-name"
}
Once you have your role name, run this next command:
aws iam get-role --role-name <YourRoleName> --query 'Role.Arn' --output text --region <REGION>
This returns the principal-arn for your account, normally in this format:
arn:aws:iam::123456789012:role/<YourRoleName>
2. Update your kubeconfig using this command:
aws eks update-kubeconfig --name <CLUSTER NAME> --region <AWS REGION>
3. Make sure you are able to call the Kubernetes API using this context:
kubectl get nodes
4. Log in to the kosmos CLI:
kosmos login https://console.kosmos.spcplatform.com/ --access-key <ACCESS KEY>
5. Connect Kosmos to the EKS cluster by using this command:
kosmos join cluster <CLUSTER NAME> --fleet <FLEET NAME>
6. Check whether Kosmos is connected to the cluster by using this command; wait until the STATUS is Ready:
kosmos list clusters --fleet <FLEET NAME>
7. Check that you can connect to the cluster by updating your kubeconfig to use the Kosmos context:
kosmos use cluster <CLUSTER NAME> --fleet <FLEET NAME>
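As referenced in step 1, the two principal-arn lookups can be combined into a short shell sketch. This assumes an assumed-role identity and a POSIX shell; adjust if your Arn is an IAM user:
# Extract the role name from the assumed-role ARN, then resolve the IAM role ARN
ROLE_NAME=$(aws sts get-caller-identity --query 'Arn' --output text | sed -E 's|.*assumed-role/([^/]+)/.*|\1|')
aws iam get-role --role-name "$ROLE_NAME" --query 'Role.Arn' --output text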
Variables
| Variables | Req. | Description | Type | Value example |
|---|---|---|---|---|
| fleet_name | ✓ | Target fleet in Kosmos where the cluster will be deployed | string | “fleet1” |
| kosmos_owner | ✓ | Kosmos user ID set as owner of the cluster | string | “kosmos_owner” |
| cluster_name | ✓ | EKS Cluster name | string | “eks-cluster-name” |
| oidc_provider_arn | ✓ | OIDC ARN to get temporary credentials to connect to AWS | string | “arn:aws:iam::123456789:oidc-provider/console.kosmos.spcplatform.com/oidc-name” |
| public_access_cidrs | ✓ | List of IP addresses that can access the cluster | list(string) | [“192.168.41.11/32”, “192.168.17.98/32”] |
| vpc_cidr | ✗ | IP Addresses for the VPC | string | 10.0.0.0/16 |
| external_nat_ip_ids | ✗ | List of Elastic IP allocation IDs to be used for the NAT Gateway; the default [] will automatically assign an IP to the NAT Gateway | list(string) | [“eipalloc-123456789”, “eipalloc-abcde12345”] |
| enable_nat_gateway | ✗ | A Boolean to create a NAT Gateway. Default is true. | bool | true |
| aws_region | ✓ | AWS region target | string | “ap-southeast-1” |
| eks_version | ✓ | Kubernetes Version to use | string | “1.30” |
| cluster_public_access | ✗ | Whether to allow public access to the cluster | bool | true |
| bastion_instance_type | ✗ | The instance type for the bastion host. default is ‘t3.micro’. | string | t3.micro |
| bastion_volume_size | ✗ | The volume size in GB for the bastion host’s EBS volume. default is 20 GB. | number | 20 |
| bastion_ssh_port | ✗ | The port number for SSH access to the bastion host. default is 4222. | number | 4222 |
| bastion_state | ✗ | State of the Bastion instance. Valid values are stopped and running. Default is stopped. | string | stopped |
| bastion_eip_id | ✗ | Elastic IP ID to be assigned to the Bastion instance; if an empty string, it will create its own Elastic IP. Default is null. | string | eipalloc-0000000000 |
| ami_type | ✗ | EKS-optimized AMI type | string | “amazon-linux-2023/x86_64/standard” |
| node_groups | ✗ | Node groups associated with the Kosmos cluster; see the spec.eks_config.node_groups schema below for more info. | object | [] |
| create_vpc | ✗ | A boolean flag indicating whether to create a new VPC for the EKS cluster. | bool | true |
| create_eks_node_security_group | ✗ | A boolean flag indicating whether to create a new security group for the EKS control plane. | bool | true |
| vpc_id | ✗ | Required if create_vpc is false, required for creating the bastion. | string | null |
| eks_node_security_group_ids | ✗ | Required if create_eks_node_security_group is false; ID of the AWS security group to associate with the EKS node groups. | string | null |
| eks_subnet_ids | ✗ | Required if create_vpc is false, List of subnet ids for the eks cluster. | list(string) | [] |
| eks_bastion_subnet_id | ✗ | Subnet ID used by the Bastion Instance, If not defined no Bastion Instance created (This only applies if create_vpc is false) | string | null |
| vpc_endpoint_security_group_ids | ✗ | Required if create_eks_node_security_group is true; ID of the AWS security group to associate with the VPC endpoints. | string | "" |
| enabled_vpc_endpoint_gateway | ✗ | List of services to enable Gateway-type VPC endpoints for the EKS cluster. | list(string) | ["s3"] |
| enabled_vpc_endpoint_interface | ✗ | List of services to enable Interface-type VPC endpoints for the EKS cluster. | list(string) | ["ec2", "ecr.api", "ecr.dkr", "eks"] |
| node_group_security_group_egress_rule | ✗ | Map of Egress Rule for the Node Group’s Security Group Rules, for reference go to aws vpc_security_group_rule docs. | map(object) | {} |
| node_group_security_group_ingress_rule | ✗ | Map of Ingress Rule for the Node Group’s Security Group Rules, for reference go to aws vpc_security_group_rule docs. | map(object) | {} |
| create_eks_cluster_security_group | ✗ | A boolean flag indicating whether to create a new security group for the EKS cluster. | bool | true |
| eks_cluster_security_group_ids | ✗ | Required if create_eks_cluster_security_group is false; ID of the AWS security group to associate with the EKS cluster. | string | null |
| cluster_security_group_ingress_rule | ✗ | Map of Ingress Rule for the Cluster’s Security Group Rules, for reference go to aws vpc_security_group_rule docs. | map(object) | {} |
| cluster_security_group_egress_rule | ✗ | Map of Egress Rules for the Cluster’s Security Group, for reference go to aws vpc_security_group_rule docs. | map(object) | {} |
| enable_irsa | ✗ | A Boolean to create an IAM OIDC identity provider for the EKS cluster. Intended for IRSA use case and should be set to true if you want to use IRSA | bool | true |
| oidc_provider_audiences | ✗ | Audiences for the IAM OIDC identity provider | list(string) | [] |
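The node_groups variable is the least self-describing entry above. A hypothetical single node group, using the fields from the spec.eks_config.node_groups schema later in this document (all values are illustrative):
node_groups = [
  {
    nodegroup_name         = "default-ng"
    disk_size              = 50
    min_size               = 1
    max_size               = 3
    desired_size           = 2
    gpu                    = false
    image_id               = null # falls back to the module's EKS-optimized AMI lookup
    request_spot_instances = false
    spot_instance_types    = []
    tags                   = {}
    labels                 = {}
    resource_tags          = {}
  }
]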
Usage examples
Note: Replace X.Y.Z with the latest module version. Check the Kosmos Terraform modules page for current releases.
Basic usage
module "eks_cluster" {
source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-eks-vX.Y.Z.tar.gz"
fleet_name = "production-fleet"
kosmos_owner = "admin-user"
aws_region = "ap-northeast-2"
cluster_name = "prod-eks-cluster"
eks_version = "1.30"
oidc_provider_arn = "arn:aws:iam::123456789:oidc-provider/console.kosmos.spcplatform.com/oidc-name"
public_access_cidrs = ["210.94.41.89/32"]
}
Advanced usage with custom VPC
module "eks_cluster" {
source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-eks-vX.Y.Z.tar.gz"
create_vpc = false
vpc_id = "vpc-12345678"
# ... other variables
}
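When create_vpc is false, the Variables table above indicates you must also supply the networking pieces yourself. An expanded sketch (all IDs are placeholders):
module "eks_cluster" {
  source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-eks-vX.Y.Z.tar.gz"

  create_vpc     = false
  vpc_id         = "vpc-12345678"
  eks_subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"]

  # Optional: only needed if you want a bastion in the existing VPC
  eks_bastion_subnet_id = "subnet-dddd4444"

  # ... other required variables (fleet_name, kosmos_owner, cluster_name, ...)
}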
Custom security group rules
There are 4 security groups created in this module:
- Node Group security group: aws_security_group_node_id
- Cluster security group: aws_security_group_cluster_id
- VPC Endpoint security group: aws_security_group_endpoint_id
- Bastion security group: aws_security_group_bastion_id
But only 2 of these security groups can be customised:
- Node Group security group rules: node_group_security_group_egress_rule, node_group_security_group_ingress_rule
- Cluster security group rules: cluster_security_group_ingress_rule, cluster_security_group_egress_rule
Below is an example of customising the egress rules of the Node Group security group.
The module creates the rules for each security group based on the aws_security_group_rule resource; please refer to the AWS docs for security group rule for more info.
module "eks_cluster" {
source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-eks-vX.Y.Z.tar.gz"
# EKS Node Group Security Group
node_group_security_group_egress_rule = {
allow_all = {
from_port = 0
to_port = 65535
protocol = "all"
cidr_blocks = ["0.0.0.0/0"]
description = "Allow all communication"
}
cluster-node = {
from_port = 0
to_port = 65535
protocol = "all"
source_security_group_id = module.eks_cluster.aws_security_group_cluster_id
description = "Allow all communication from cluster to node"
}
}
}
Fully private cluster
- Disable the NAT Gateway by setting enable_nat_gateway to false
- Set up the minimum security group permissions
- Set up the minimum VPC endpoints required
module "eks_cluster" {
source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-eks-vX.Y.Z.tar.gz"
# EKS Node Group Security Group
node_group_security_group_egress_rule = {}
node_group_security_group_ingress_rule = {}
# EKS Cluster Security Group
cluster_security_group_ingress_rule = {}
cluster_security_group_egress_rule = {}
# NAT Gateway
enable_nat_gateway = false
# VPC Endpoint
enabled_vpc_endpoint_gateway = ["s3"]
enabled_vpc_endpoint_interface = ["ec2", "ecr.api", "ecr.dkr", "eks", "kms", "logs", "elasticloadbalancing", "autoscaling", "eks-auth", "sts"]
}
Semi private cluster
- Enable the NAT Gateway by setting enable_nat_gateway to true
- Define the security group permissions
- Set up the minimum VPC endpoints required
module "eks_cluster" {
source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-eks-vX.Y.Z.tar.gz"
# EKS Node Group Security Group
node_group_security_group_egress_rule = {
allow_all = {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
description = "Allow all communication"
}
cluster-node = {
from_port = 443
to_port = 443
protocol = "all"
source_security_group_id = module.eks_cluster.aws_security_group_cluster_id
description = "Allow all communication from cluster to node"
}
}
node_group_security_group_ingress_rule = {
cluster-node = {
from_port = 443
to_port = 443
protocol = "all"
source_security_group_id = module.eks_cluster.aws_security_group_cluster_id
description = "Allow all communication from node to cluster"
}
}
# EKS Cluster Security Group
cluster_security_group_ingress_rule = {
cluster-node = {
from_port = 443
to_port = 443
protocol = "all"
source_security_group_id = module.eks_cluster.aws_security_group_node_id
description = "Allow all communication from node to cluster"
}
}
cluster_security_group_egress_rule = {
allow_all = {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
description = "Allow all communication"
}
cluster-node = {
from_port = 443
to_port = 443
protocol = "all"
source_security_group_id = module.eks_cluster.aws_security_group_node_id
description = "Allow all communication from cluster to node"
}
}
# NAT Gateway
enable_nat_gateway = true
# VPC Endpoint
enabled_vpc_endpoint_gateway = ["s3"]
enabled_vpc_endpoint_interface = ["ec2", "ecr.api", "ecr.dkr", "eks", "kms", "logs", "elasticloadbalancing", "autoscaling", "eks-auth", "sts"]
}
Quick Start Example
This complete example creates an EKS cluster using the Kosmos provider with the EKS module:
# providers.tf / main.tf - Complete working example
terraform {
required_version = ">= 1.3"
required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 5.95"
}
kosmos = {
source = "local/samsung/kosmos"
version = ">= 0.11.0"
}
tls = {
source = "hashicorp/tls"
version = ">= 4.0"
}
}
}
variable "kosmos_access_key" {
description = "Kosmos API access key"
type = string
sensitive = true
}
variable "aws_region" {
description = "AWS region"
type = string
default = "ap-northeast-2"
}
provider "aws" {
region = var.aws_region
}
provider "kosmos" {
accesskey = var.kosmos_access_key
endpoint = "https://console.kosmos.spcplatform.com"
}
# ============================================
# PREREQUISITES
# ============================================
# 1. Kosmos OIDC Provider (TF-native thumbprint extraction)
data "tls_certificate" "kosmos_oidc" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
}
resource "aws_iam_openid_connect_provider" "kosmos" {
url = "https://console.kosmos.spcplatform.com/kosmos-oidc"
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = [data.tls_certificate.kosmos_oidc.certificates[0].sha1_fingerprint]
tags = {
Name = "kosmos-oidc-provider"
}
}
# 2. AmazonEKSNodeRole (required by Kosmos webhook)
resource "aws_iam_role" "amazon_eks_node_role" {
name = "AmazonEKSNodeRole"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [{
Effect = "Allow"
Principal = { Service = "ec2.amazonaws.com" }
Action = "sts:AssumeRole"
}]
})
}
resource "aws_iam_role_policy_attachment" "node_worker" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.amazon_eks_node_role.name
}
resource "aws_iam_role_policy_attachment" "node_cni" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.amazon_eks_node_role.name
}
resource "aws_iam_role_policy_attachment" "node_ecr" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.amazon_eks_node_role.name
}
# ============================================
# EKS CLUSTER MODULE
# ============================================
module "eks" {
source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-eks-vX.Y.Z.tar.gz"
# Required variables
fleet_name = "your-fleet-name"
kosmos_owner = "your-kosmos-username"
aws_region = var.aws_region
cluster_name = "my-eks-cluster"
eks_version = "1.30"
oidc_provider_arn = aws_iam_openid_connect_provider.kosmos.arn
public_access_cidrs = ["your-ip-here/32"]
# Ensure prerequisites are created first
depends_on = [
aws_iam_openid_connect_provider.kosmos,
aws_iam_role.amazon_eks_node_role,
aws_iam_role_policy_attachment.node_worker,
aws_iam_role_policy_attachment.node_cni,
aws_iam_role_policy_attachment.node_ecr,
]
}
Create a terraform.tfvars file:
kosmos_access_key = "your-kosmos-access-key"
aws_region = "ap-northeast-2"
Then run:
terraform init
terraform plan
terraform apply
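Once apply completes, you can sanity-check the result before moving on to the Connecting EKS cluster to Kosmos steps. A sketch, using the cluster and fleet names from the example above:
# Cluster should report ACTIVE
aws eks describe-cluster --name my-eks-cluster --region ap-northeast-2 --query 'cluster.status'
# After joining, the cluster should eventually show STATUS Ready
kosmos list clusters --fleet your-fleet-name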
Created resources
AWS
- VPC
- 3 Public Subnets
- 3 Private Subnets
- Optional NAT Gateway with 1 auto-assigned Elastic IP or a custom Elastic IP
- Internet Gateway
- 1 Public route table with access to local and internet gateway
- 1 Private route table with access to local and NAT Gateway
- Bastion Instance
- KMS
- S3 Bucket KMS
- Cluster KMS
- Security Group
- Default security group
- EKS Cluster security group with customizable permissions
- EKS Node group security group with customizable permissions
- Bastion security group
- VPC Endpoint security group
- S3 Bucket
- Bucket logging enabled
- Server side encryption
- 365 days data retention
- IAM
- Kosmos role (prerequisite for eksCluster resource)
- EKS cluster role with AmazonEKSClusterPolicy & AmazonEKSVPCResourceController
Kosmos
- EKS Cluster
- Enabled Logging (API Server, Audit, Authenticator, Controller Manager, Scheduler)
- Associated with private subnets only
- 1 Nodegroup
- Security Group
- EKS Cluster default security group
Default created security group rules
Node group
Inbound rules
| Port range | Protocol | Source | Description |
|---|---|---|---|
| 443 | TCP | Cluster Security Group | HTTPS from cluster to node |
| 10250 | TCP | Cluster Security Group | Kubelet API from cluster to node |
| 8443 | TCP | Cluster Security Group | Gatekeeper Webhook - Policy Management feature |
| 8080 | TCP | Cluster Security Group | Loft Agent - HTTP traffic |
| 9090 | TCP | Cluster Security Group | Loft Agent - sleep & wakeup |
| 9443 | TCP | Cluster Security Group | Loft Agent - Webhook |
| 9444 | TCP | Cluster Security Group | Loft Agent - Apiserver |
| 10443 | TCP | Cluster Security Group | Loft Agent - HTTPS traffic |
| 8443 | TCP | Self | Gatekeeper Webhook - Policy Management feature |
| 53 | TCP | Self | CoreDNS with TCP from node to node |
| 53 | UDP | Self | CoreDNS with UDP from node to node |
Outbound rules
| Port range | Protocol | Source | Description |
|---|---|---|---|
| 443 | TCP | Cluster Security Group | Node to Cluster API 443/tcp |
| 443 | TCP | VPC Endpoints Security Group | Node to VPC Endpoints 443/tcp |
| 443 | TCP | AWS S3 Prefix List | Node to S3 |
| 8443 | TCP | Self | Gatekeeper Webhook - Policy Management feature |
| 53 | TCP | Self | CoreDNS with TCP from node to node |
| 53 | UDP | Self | CoreDNS with UDP from node to node |
| 53 | TCP | Cluster Security Group | CoreDNS with TCP from nodes to cluster |
| 53 | UDP | Cluster Security Group | CoreDNS with UDP from nodes to cluster |
Cluster
Inbound rules
| Port range | Protocol | Source | Description |
|---|---|---|---|
| 443 | TCP | Node Group Security Group | HTTPS from node to cluster |
| 53 | TCP | Node Group Security Group | CoreDNS with TCP from nodes to cluster |
| 53 | UDP | Node Group Security Group | CoreDNS with UDP from nodes to cluster |
Outbound rules
| Port range | Protocol | Source | Description |
|---|---|---|---|
| 443 | TCP | Node Group Security Group | HTTPS from cluster to node |
| 10250 | TCP | Node Group Security Group | Kubelet API from cluster to node |
| 8443 | TCP | Node Group Security Group | Gatekeeper Webhook - Policy Management feature |
| 8080 | TCP | Node Group Security Group | Loft Agent - HTTP traffic |
| 9090 | TCP | Node Group Security Group | Loft Agent - sleep & wakeup |
| 9443 | TCP | Node Group Security Group | Loft Agent - Webhook |
| 9444 | TCP | Node Group Security Group | Loft Agent - Apiserver |
| 10443 | TCP | Node Group Security Group | Loft Agent - HTTPS traffic |
VPC endpoint
Inbound rules
| Port range | Protocol | Source | Description |
|---|---|---|---|
| 443 | TCP | Node Group Security Group | VPC Endpoints from Node 443/tcp |
Outbound rules
No Rules Defined
Bastion
Inbound rules
| Port range | Protocol | Source | Description |
|---|---|---|---|
| bastion_ssh_port | TCP | public_access_cidrs | [SEC_GW] SSH from public CIDR blocks |
Outbound rules
No Rules Defined
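Because the bastion's only inbound rule opens bastion_ssh_port from public_access_cidrs, SSH sessions must target that port explicitly. A sketch, assuming the default port 4222 and an Amazon Linux AMI (the ec2-user login and key path are assumptions, not module outputs):
ssh -p 4222 -i ~/.ssh/your-key.pem ec2-user@<BASTION ELASTIC IP>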
Security checklist
List of checklist items that conform
EC2
- Instance Metadata Service
Ensure that the vulnerable version of the Instance Metadata Service is not in use (only IMDSv2 should be in use, and if not necessary, the Instance Metadata Service should be disabled)
metadata_options {
  http_endpoint               = "enabled"
  http_tokens                 = "required"
  http_put_response_hop_limit = 2
}
EKS
Cluster Management
Ensure that “Secrets encryption” is turned on
secrets_encryption = true
kms_key            = module.kms.key_arn
Networking Management
Ensure the API Server Endpoint Access is private and accepts requests only from the EKS VPC
private_access = true
If the value of “API server endpoint access” is ‘Public’, ensure that the required access targets are limited.
public_access_sources = var.public_access_cidrs
Logging Management
logging_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
S3 Bucket Data Protection
- In-Transit Encrypted
Ensure that S3 buckets use encrypted communication protocol (HTTPS)
attach_deny_insecure_transport_policy = true
S3 Bucket Management
S3 Assets Management (Required Tags)
Ensure that the required tags are attached to all the S3 buckets.
tags = {
  "SEC_ASSETS_PII"    = "N"
  "SEC_ASSETS_PUBLIC" = "N"
}
Data Retention Policy
Ensure that lifecycle rule is set on confidential/personal information containing S3 buckets to delete the data periodically.
lifecycle_rule = [
  {
    id      = "data-retention"
    enabled = true
    expiration = {
      days = 365
    }
  }
]
Logging Configuration
- Enabling VPC Flow logs
Ensure that VPC Flow Logs are enabled.
enable_flow_log           = true
flow_log_destination_arn  = module.s3_bucket.s3_bucket_arn
flow_log_destination_type = "s3"
KMS
- Key generation
- Ensure that keys are dedicated to a single purpose
- Key Rotation Configuration
- Ensure that the Key Rotation is activated.
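For reference, the two KMS checklist items above correspond to standard aws_kms_key settings. A minimal sketch of per-purpose keys with rotation enabled in plain Terraform (resource names are illustrative, not the module's internals):
resource "aws_kms_key" "cluster_secrets" {
  description         = "Dedicated key for EKS secrets encryption"
  enable_key_rotation = true
}

resource "aws_kms_key" "s3_bucket" {
  description         = "Dedicated key for S3 bucket encryption"
  enable_key_rotation = true
}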
List of checklist items that do not conform
- VPC Configuration
- Private Subnet Access Control
- Check if the NAT Gateway is connected to the route tables of the private subnets.
- Network ACLs & Security groups
- Security Group Management
- Ensure that the Security Groups’ inbound / outbound rules comply with the following management policy:
- A policy allowing a wide range of CIDR blocks (exceeding a 24-bit mask)
- Do not use anywhere outbound (0.0.0.0/0)
- A policy allowing all ports
- SG Description Management
- Ensure that Inbound / Outbound rules of the Security Groups have the mandatory description
- EKS
- Security Group Management
- Ensure that Cluster’s security groups only allow the communications necessary for the EKS Cluster.
- Ensure that Additional security groups(for Control Plane) only allow the communication which is necessary for EKS Cluster’s Control Plane operations.
Required permissions
KMS permissions
kms:CreateKey
kms:DescribeKey
kms:EnableKey
kms:DisableKey
kms:ScheduleKeyDeletion
kms:CancelKeyDeletion
kms:CreateAlias
kms:DeleteAlias
kms:UpdateAlias
kms:ListAliases
kms:PutKeyPolicy
kms:GetKeyPolicy
kms:ListKeys
IAM permissions
iam:CreateRole
iam:GetRole
iam:DeleteRole
iam:UpdateAssumeRolePolicy
iam:AttachRolePolicy
iam:DetachRolePolicy
iam:ListAttachedRolePolicies
iam:PassRole
iam:GetContextKeysForCustomPolicy
iam:GetContextKeysForPrincipalPolicy
sts:GetCallerIdentity
iam:CreatePolicy
iam:DeletePolicy
iam:GetPolicy
iam:ListPolicyVersions
iam:CreatePolicyVersion
iam:DeletePolicyVersion
iam:SetDefaultPolicyVersion
iam:CreateOpenIDConnectProvider
iam:DeleteOpenIDConnectProvider
iam:GetOpenIDConnectProvider
S3 permissions
s3:CreateBucket
s3:DeleteBucket
s3:PutBucketAcl
s3:GetBucketAcl
s3:PutBucketPolicy
s3:GetBucketPolicy
s3:PutBucketPublicAccessBlock
s3:GetBucketPublicAccessBlock
s3:PutBucketVersioning
s3:GetBucketVersioning
s3:PutBucketLogging
s3:GetBucketLogging
s3:PutBucketLifecycleConfiguration
s3:GetBucketLifecycleConfiguration
s3:PutEncryptionConfiguration
s3:GetEncryptionConfiguration
s3:PutBucketTagging
s3:GetBucketTagging
s3:DeleteBucketPolicy
s3:DeleteBucketTagging
s3:GetBucketLocation
s3:ListBucket
VPC permissions
ec2:CreateVpc
ec2:DeleteVpc
ec2:DescribeVpcs
ec2:ModifyVpcAttribute
ec2:CreateSubnet
ec2:DeleteSubnet
ec2:DescribeSubnets
ec2:CreateRouteTable
ec2:DeleteRouteTable
ec2:AssociateRouteTable
ec2:DisassociateRouteTable
ec2:CreateRoute
ec2:DeleteRoute
ec2:ReplaceRoute
ec2:ReplaceRouteTableAssociation
ec2:CreateInternetGateway
ec2:AttachInternetGateway
ec2:DetachInternetGateway
ec2:DeleteInternetGateway
ec2:CreateNatGateway
ec2:DeleteNatGateway
ec2:DescribeNatGateways
ec2:AllocateAddress
ec2:ReleaseAddress
ec2:CreateSecurityGroup
ec2:DeleteSecurityGroup
ec2:AuthorizeSecurityGroupIngress
ec2:RevokeSecurityGroupIngress
ec2:AuthorizeSecurityGroupEgress
ec2:RevokeSecurityGroupEgress
ec2:CreateNetworkAcl
ec2:DeleteNetworkAcl
ec2:CreateNetworkAclEntry
ec2:DeleteNetworkAclEntry
ec2:AssociateNetworkAcl
ec2:DisassociateNetworkAcl
ec2:CreateVpcEndpoint
ec2:DeleteVpcEndpoints
ec2:DescribeVpcEndpoints
ec2:CreateFlowLogs
ec2:DeleteFlowLogs
ec2:DescribeFlowLogs
ec2:DescribeNetworkInterfaces
ec2:DescribeTags
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:CreateTags
ec2:DescribeAvailabilityZones
EKS permissions
eks:*
autoscaling:CreateAutoScalingGroup
autoscaling:UpdateAutoScalingGroup
autoscaling:DeleteAutoScalingGroup
autoscaling:DescribeAutoScalingGroups
logs:CreateLogGroup
logs:PutRetentionPolicy
logs:DescribeLogGroups
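If you prefer to manage these permissions as a customer managed policy, the lists above can be grouped into statements. A truncated sketch (the policy name is illustrative; fill in the remaining actions from the lists above):
resource "aws_iam_policy" "kosmos_eks_provisioner" {
  name = "kosmos-eks-provisioner" # illustrative name
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid      = "Kms"
        Effect   = "Allow"
        Action   = ["kms:CreateKey", "kms:DescribeKey", "kms:ListKeys"] # ...rest of the KMS list above
        Resource = "*"
      },
      {
        Sid      = "Eks"
        Effect   = "Allow"
        Action   = ["eks:*"]
        Resource = "*"
      }
    ]
  })
}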
kosmos_eksCluster (Resource)
Example Usage
resource "kosmos_eksCluster" "ekscluster" {
depends_on = [module.vpc, module.kosmos_oidc, aws_iam_role_policy_attachment.cluster_eks_policy_attachment, aws_iam_role_policy_attachment.eks_vpc_controller_policy_attachment]
name = var.cluster_name
namespace = var.fleet_name
spec = {
name = var.cluster_name
authorization = {
owner = {
user = var.kosmos_owner
}
}
eks_config = {
kosmos_jwtclaims = {
iss = ""
}
kubernetes_version = var.eks_version
public_access = var.cluster_public_access
private_access = true
kosmos_role_arn = module.kosmos_oidc.kosmos_role_arn
display_name = var.cluster_name
region = var.aws_region
logging_types = local.logging_types
secrets_encryption = true
tags = {
name = var.cluster_name
owner = var.kosmos_owner
}
subnets = local.eks_subnet_ids
security_groups = [local.eks_cluster_security_group_ids]
public_access_sources = var.public_access_cidrs
ebsCSIDriver = true
imported = false
kms_key = module.kms_cluster.key_arn
cluster_role = aws_iam_role.cluster_role.name
node_groups = [
for node_group in var.node_groups : {
nodegroup_name = node_group.nodegroup_name
node_role = "AmazonEKSNodeRole"
resource_tags = merge({
name = "${node_group.nodegroup_name}-resource-tag"
}, node_group.resource_tags)
disk_size = node_group.disk_size
instance_type = ""
version = var.eks_version
min_size = node_group.min_size
max_size = node_group.max_size
desired_size = node_group.desired_size
gpu = node_group.gpu
subnets = local.eks_subnet_ids
image_id = coalesce(node_group.image_id, data.aws_ssm_parameter.eks_ami.value)
tags = merge({
owner = var.kosmos_owner
}, node_group.tags)
labels = merge({
name = node_group.nodegroup_name
}, node_group.labels)
request_spot_instances = node_group.request_spot_instances
spot_instance_types = node_group.spot_instance_types
launch_template = {
id = aws_launch_template.node_group[node_group.nodegroup_name].id
name = aws_launch_template.node_group[node_group.nodegroup_name].name
version = aws_launch_template.node_group[node_group.nodegroup_name].latest_version
}
}
]
}
}
}
Schema
Required
- spec (Attributes) EKSClusterSpec defines the specification of EKSCluster (see below for nested schema)
Optional
- name (String) Name of the EKSCluster
- namespace (String) Object name and auth scope, such as for teams and projects
Nested schema for spec
Required:
- eks_config (Attributes) Required. Configuration for the EKS operator. (see below for nested schema)
Optional:
- authorization (Attributes) Optional. Configuration related to the cluster RBAC settings. (see below for nested schema)
- binary_authorization (Attributes) Optional. Binary Authorization configuration for this cluster. (see below for nested schema)
- description (String) Optional. A human readable description of this cluster. Cannot be longer than 255 UTF-8 encoded bytes.
- display_name (String) Optional. If specified, this name is displayed in the UI instead of the metadata name
- logging_config (Attributes) Optional. Logging configuration for this cluster. (see below for nested schema)
- monitoring_config (Attributes) Optional. Monitoring configuration for this cluster. (see below for nested schema)
- name (String) Cluster name. It will be deprecated
- oidc_config (Attributes) Optional. OpenID Connect (OIDC) configuration for the cluster. (see below for nested schema)
- owner (String) Optional. Owner of the cluster. It will be filled by Kosmos
Nested schema for spec.eks_config
Required:
- display_name (String) Required. DisplayName is the name of the EKS cluster.
- imported (Boolean) Required. Imported indicates whether the cluster is imported.
- kosmos_role_arn (String) Required. KosmosRoleArn is the ARN of the role used by Kosmos to manage the EKS cluster.
- region (String) Required. Region is the AWS region where the cluster is located.
Optional:
- bootstrap_access_entry (Attributes) Optional. BootstrapAccessEntry is the access entry that allows bootstrap access to the cluster. (see below for nested schema)
- cluster_role (String) Optional. ClusterRole is the IAM role assumed by the EKS cluster. This role is used to grant permissions to the cluster.
- delete_on_detachment (Boolean) DeleteOnDetachment indicates whether the cluster should be deleted when it is detached from Kosmos. Required if imported==false
- ebs_csidriver (Boolean) Optional. EBSCSIDriver indicates whether the EBS CSI driver is enabled on the cluster.
- kms_key (String) Optional. KmsKey is the KMS key used for secrets encryption.
- kubernetes_version (String) KubernetesVersion is the version of Kubernetes running on the cluster. Required if imported==false
- logging_types (List of String) Optional. LoggingTypes are the types of logging enabled on the cluster.
- node_groups (Attributes List) Optional. NodeGroups are the node groups associated with the cluster. (see below for nested schema)
- private_access (Boolean) PrivateAccess indicates whether private access is enabled on the cluster. Required if imported==false
- public_access (Boolean) PublicAccess indicates whether public access is enabled on the cluster. Required if imported==false
- public_access_sources (List of String) PublicAccessSources are the sources allowed to access the cluster via public access. Required if imported==false
- secrets_encryption (Boolean) Optional. SecretsEncryption indicates whether secrets encryption is enabled on the cluster.
- security_groups (List of String) Optional. SecurityGroups are the security groups associated with the cluster.
- subnets (List of String) Optional. Subnets are the subnets associated with the cluster.
- tags (Map of String) Optional. Tags are the tags applied to the EKS cluster.
Nested schema for spec.eks_config.bootstrap_access_entry
Required:
- principal_arn (String) Required. PrincipalArn is the ARN of the IAM role or user granted access to the cluster. Note: STS session principals cannot be used with access entries.
Nested schema for spec.eks_config.node_groups
Required:
- desired_size (Number) Required. DesiredSize is the desired number of nodes in the node group.
- max_size (Number) Required. MaxSize is the maximum number of nodes in the node group.
- min_size (Number) Required. MinSize is the minimum number of nodes in the node group.
- nodegroup_name (String) NodeGroupName is the name of the node group.
- version (String) Required. Version is the Kubernetes version running on the nodes.
Optional:
- arm (Boolean) Optional.
- disk_size (Number) DiskSize is the size of the disk attached to the nodes.
- ec2ssh_key (String) Optional. Ec2SshKey is the SSH key used to connect to the nodes.
- gpu (Boolean) Optional. Gpu indicates whether the node group has GPU instances.
- image_id (String) Optional. ImageID is the AMI ID used for the node group.
- instance_type (String) InstanceType is the type of instance used for the node group.
- labels (Map of String) Optional. Labels are the labels applied to the nodes in the node group.
- launch_template (Attributes) Optional. LaunchTemplate is the launch template used for the node group. (see below for nested schema)
- node_role (String) Optional. NodeRole is the IAM role assumed by the nodes in the node group.
- request_spot_instances (Boolean) Optional. RequestSpotInstances indicates whether spot instances are requested for the node group.
- resource_tags (Map of String) Optional. ResourceTags are the tags applied to the resources created for the node group.
- spot_instance_types (List of String) Optional. SpotInstanceTypes are the instance types used for spot instances.
- subnets (List of String) Optional. Subnets are the subnets associated with the node group.
- tags (Map of String) Optional. Tags are the tags applied to the nodes in the node group.
- user_data (String) Optional. UserData is the user data script executed on the nodes.
Nested schema for spec.eks_config.node_groups.launch_template
Required:
- id (String)
- name (String)
- version (Number)
Nested schema for spec.authorization
Optional:
- admin_teams (List of String) Optional. Groups of users that can perform operations as a cluster admin. A managed ClusterRoleBinding will be created to grant the cluster-admin ClusterRole to the groups. Up to ten admin groups can be provided. For more info on RBAC, see https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
- admin_users (List of String) Optional. Users that can perform operations as a cluster admin. A managed ClusterRoleBinding will be created to grant the cluster-admin ClusterRole to the users. Up to ten admin users can be provided. For more info on RBAC, see https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
Nested schema for spec.binary_authorization
Optional:
- evaluation_mode (String) Define binary authorization properties here
Nested schema for spec.logging_config
Optional:
- component_config (Attributes) Parameters that describe the Logging configuration in a cluster. (see below for nested schema)
Nested schema for spec.logging_config.component_config
Optional:
- enable_components (List of String)
Nested schema for spec.monitoring_config
Optional:
- managed_prometheus_config (Attributes) Enable SPC Kosmos Managed Service for Prometheus in the cluster. (see below for nested schema)
- managed_thanos_config (Attributes) Enable SPC Kosmos Managed Service for Thanos in the cluster. (see below for nested schema)
Nested schema for spec.monitoring_config.managed_prometheus_config
Optional:
- enabled (Boolean)
Nested schema for spec.monitoring_config.managed_thanos_config
Optional:
- enabled (Boolean)
Nested schema for spec.oidc_config
Optional:
- issuer_uri (String) A JSON Web Token (JWT) issuer URI. issuer must start with https://.
- jwks (String) Optional. OIDC verification keys in JWKS format (RFC 7517). It contains a list of OIDC verification keys that can be used to verify OIDC JWTs. This field is required for a cluster that doesn’t have a publicly available discovery endpoint. When provided, it will be directly used to verify the OIDC JWT asserted by the IDP. A base64-encoded string.