MKS - Create and Import Cluster

Overview

This guide provides instructions for creating and importing MKS (Managed Kubernetes Service) clusters in Kosmos on the SPC platform.

Prerequisites

Required tools:

  • aws CLI – for IAM, STS, and EKS-compatible API calls
  • kosmos CLI – for cluster creation, import, and fleet operations
  • kubectl – for validating cluster access
  • openssl and jq – used by the helper scripts and commands in this guide

Required access:

  • Kosmos access key: For authenticating with Kosmos console
  • SPC IAM account: With root or admin access for creating roles and policies
  • Network access:
    • Proxy exceptions for console.kosmos.spcplatform.com (if behind corporate proxy)
    • VPN access to appropriate environments
  • VPC setup: a VPC with two or more subnets, plus their route table and security group configuration

Network prerequisites:

Before creating a cluster, ensure you have:

  • A VPC with appropriate CIDR blocks
  • At least 2 public and 2 private subnets in different availability zones
  • Internet Gateway attached (for public clusters)
  • Route tables configured
  • Security groups with appropriate ingress/egress rules

Cluster Role requirements:

  • MKS service role – Used by the control plane to manage SPC resources (SPC mimics the EKS API)
  • Node instance role – Used by worker nodes to interact with SPC resources

Example – Create MKS service and Node group roles

Create MKS service role:

SPC mimics the EKS API for portability with AWS.

Create service-role-trust.json:

cat > service-role-trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Effect": "Allow"
    }
  ]
}
EOF
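
Before creating the role, you can sanity-check the trust document locally. A quick sketch, assuming jq (used later in this guide) is installed:

```shell
# Confirm the trust document is valid JSON and grants sts:AssumeRole.
# Prints a warning instead of failing if the file has not been created yet.
test -f service-role-trust.json \
  && jq -e '.Statement[0].Action == "sts:AssumeRole" and .Statement[0].Effect == "Allow"' \
       service-role-trust.json > /dev/null \
  && echo "service-role-trust.json OK" \
  || echo "service-role-trust.json missing or malformed" >&2
```

A parse error here is much easier to debug than an opaque IAM API rejection later.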

Create the service role:

Create role:

aws iam create-role \
    --role-name mks-service-role \
    --assume-role-policy-document file://service-role-trust.json

Attach SPC managed policy:

# AmazonEKSClusterPolicy - SPC uses AWS-compatible EKS policies
aws iam attach-role-policy \
    --role-name mks-service-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

Create node role (if creating new clusters):

Create node-role-trust.json:

cat > node-role-trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

Create the node role:

aws iam create-role \
    --role-name mksNodeRole \
    --assume-role-policy-document file://node-role-trust.json

Attach required policies:

# EKS policies for AWS compatibility - enables workload portability
aws iam attach-role-policy \
    --role-name mksNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam attach-role-policy \
    --role-name mksNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

aws iam attach-role-policy \
    --role-name mksNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

Environment variables needed

Before starting, gather these values:

  • ${ACCOUNT_ID}: Your SPC account ID
  • ${ACCESS_KEY}: Your Kosmos access key
  • ${FLEET_ID}: Your Kosmos fleet identifier
  • ${CLUSTER_NAME}: Desired name for your cluster
  • ${REGION}: Target SPC region (e.g., us-east-1)
  • ${ADMIN_TEAM}: Kosmos team name from the console (grants cluster-admin access)
  • ${OWNER}: Owner identifier for the cluster

Set the environment values in the current session:

ACCOUNT_ID=<Your-SPC-account-ID>
ACCESS_KEY=<Your-Kosmos-access-key>
FLEET_ID=<Your-Kosmos-fleet-identifier>
CLUSTER_NAME=<Desired-name-for-your-cluster>
REGION=<Target-SPC-region>
ADMIN_TEAM=<Your-team-name>
OWNER=<Owner-identifier-for-the-cluster>
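
Since every later command depends on these values, it can help to verify they are all set before continuing. A sketch of a pre-flight check (the helper name and placeholder values are illustrative):

```shell
# Pre-flight check: fail fast if any required variable is unset or empty.
require_vars() {
  missing=0
  for name in "$@"; do
    eval "value=\${$name}"           # portable indirect expansion
    if [ -z "$value" ]; then
      echo "Missing required variable: $name" >&2
      missing=1
    fi
  done
  return $missing
}

# Example with placeholder values set:
ACCOUNT_ID=123456789012 ACCESS_KEY=example-key FLEET_ID=example-fleet
CLUSTER_NAME=demo-cluster REGION=us-east-1 ADMIN_TEAM=platform OWNER=alice

require_vars ACCOUNT_ID ACCESS_KEY FLEET_ID CLUSTER_NAME REGION ADMIN_TEAM OWNER \
  && echo "All required variables are set."
```

Running this before Part 1 avoids heredocs silently expanding empty variables into the JSON files below.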

Part 1: Set up OIDC provider configuration

What is OIDC and why is it needed?

OpenID Connect (OIDC) is an identity layer built on top of OAuth 2.0 that allows one service to verify the identity of another through a trusted identity provider. Instead of sharing long-lived credentials, services exchange short-lived tokens that prove who they are.

How Kosmos uses OIDC to access SPC:

When Kosmos needs to manage resources in your SPC account (such as creating MKS nodes or updating cluster configurations), it generates a signed identity token. This token contains claims identifying the specific fleet making the request. SPC then validates this token against the OIDC provider you registered and, if trusted, issues temporary credentials.

Why we use the SSL certificate thumbprint:

When you register Kosmos as an OIDC provider in SPC, you provide a certificate thumbprint. This thumbprint tells SPC: “Trust tokens signed by the service presenting this certificate.” It ensures SPC only accepts tokens from the legitimate Kosmos identity provider, preventing impersonation.

Benefits of this approach:

  • No stored credentials — Kosmos never stores your SPC access keys
  • Short-lived access — Each operation uses fresh, temporary credentials
  • You control trust — Remove the OIDC provider anytime to revoke access

Register OIDC provider

You have two options for registering the OIDC provider:

Option A: Using the SPC console UI (Recommended)

  1. Log in to the SPC Console
  2. Navigate to: Identity & Security → SPC IAM → Identity Providers
  3. Click + Add provider
  4. Enter the following:
    • Provider URL: https://console.kosmos.spcplatform.com/kosmos-oidc
    • Audience: kosmos-operator
  5. Click Get thumbprint
  6. Click Add provider

Option B: Manual configuration via CLI

  1. Get the certificate thumbprint:

Save the following script as get_oidc_fingerprint.sh:

#!/bin/bash
# get_oidc_fingerprint.sh - Extract OIDC provider certificate fingerprint
# Usage: ./get_oidc_fingerprint.sh [OIDC_URL]

OIDC_URL="${1:-https://console.kosmos.spcplatform.com/kosmos-oidc}"
HOST=$(echo "$OIDC_URL" | sed -E 's|https?://([^/:]+).*|\1|')

# Get certificate chain (</dev/null prevents command from hanging)
openssl s_client -servername "$HOST" -showcerts -connect "$HOST:443" </dev/null 2>/dev/null > certs_chain.txt

if [ ! -s certs_chain.txt ]; then
  echo "Failed to retrieve certificate chain."
  exit 1
fi

# Split certificate chain into individual files
awk 'BEGIN {cert=""; count=0}
     /BEGIN CERTIFICATE/ {cert=$0; next}
     /END CERTIFICATE/ {cert=cert "\n" $0; filename=sprintf("cert_%02d.crt", count++); print cert > filename; cert=""}
     {if (cert != "") cert=cert "\n" $0}' certs_chain.txt

# Find certificate matching the domain
for cert_file in cert_*.crt; do
  subject=$(openssl x509 -in "$cert_file" -noout -subject 2>/dev/null)
  altnames=$(openssl x509 -in "$cert_file" -noout -text 2>/dev/null | grep -A 1 "Subject Alternative Name")

  if echo "$subject" | grep -q "CN.*$HOST" || echo "$altnames" | grep -q "$HOST"; then
    FINGERPRINT=$(openssl x509 -in "$cert_file" -fingerprint -sha1 -noout | sed 's/://g' | awk -F= '{print $2}')
    FINGERPRINT=$(echo "$FINGERPRINT" | tr '[:upper:]' '[:lower:]')
    echo "$FINGERPRINT"
    rm -f certs_chain.txt cert_*.crt
    exit 0
  fi
done

echo "No matching certificate found."
rm -f certs_chain.txt cert_*.crt
exit 1

Run the script:

chmod +x get_oidc_fingerprint.sh
./get_oidc_fingerprint.sh

This outputs a 40-character SHA1 fingerprint (e.g., c940c37c6a3d3327385074008b7009cb5c8f84d6).
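
To see what the final pipeline in the script does, here is the same normalization applied in isolation to openssl's raw fingerprint format (using the example value above):

```shell
# openssl prints "SHA1 Fingerprint=C9:40:..."; IAM expects the 40-character
# lowercase hex form with no colons. The script's final pipeline, in isolation:
raw='SHA1 Fingerprint=C9:40:C3:7C:6A:3D:33:27:38:50:74:00:8B:70:09:CB:5C:8F:84:D6'
thumbprint=$(echo "$raw" | awk -F= '{print $2}' | sed 's/://g' | tr '[:upper:]' '[:lower:]')
echo "$thumbprint"   # c940c37c6a3d3327385074008b7009cb5c8f84d6
```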

  2. Create the OIDC provider configuration file (create-oidc-provider.json):
cat > create-oidc-provider.json <<EOF
{
  "Url": "https://console.kosmos.spcplatform.com/kosmos-oidc",
  "ClientIDList": ["kosmos-operator"],
  "ThumbprintList": ["YOUR_THUMBPRINT_FROM_ABOVE"]
}
EOF
  3. Create the OIDC provider (one provider per fleet):
aws iam create-open-id-connect-provider \
    --cli-input-json file://create-oidc-provider.json > oidc-provider-output.json

Save the returned OpenIDConnectProviderArn - you’ll need it next.
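
If you prefer to script the thumbprint substitution rather than paste the value by hand, one possible approach (a sketch; the THUMBPRINT shown is the example fingerprint from above, which in practice would come from get_oidc_fingerprint.sh):

```shell
# In practice: THUMBPRINT=$(./get_oidc_fingerprint.sh)
THUMBPRINT="c940c37c6a3d3327385074008b7009cb5c8f84d6"   # example value only

cat > create-oidc-provider.json <<EOF
{
  "Url": "https://console.kosmos.spcplatform.com/kosmos-oidc",
  "ClientIDList": ["kosmos-operator"],
  "ThumbprintList": ["${THUMBPRINT}"]
}
EOF

# Verify the thumbprint landed in the config (assumes jq).
jq -e --arg t "$THUMBPRINT" '.ThumbprintList == [$t]' create-oidc-provider.json > /dev/null \
  && echo "create-oidc-provider.json OK"
```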

---

Part 2: IAM role and policy setup

IAM roles and policies define who can access what in an SPC environment. They enable secure, least-privilege access control for both users (humans) and services (applications, workloads, or other SPC resources). In this guide, they allow Kosmos components to interact securely with SPC services.

Step 1: Create trust entity for Kosmos role

Create trust-entity.json:

cat > trust-entity.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/console.kosmos.spcplatform.com/kosmos-oidc"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "console.kosmos.spcplatform.com/kosmos-oidc:sub": "fleet-${FLEET_ID}",
          "console.kosmos.spcplatform.com/kosmos-oidc:aud": "kosmos-operator"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": ["$(aws sts get-caller-identity --query Arn --output text)"]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

Step 2: Create Kosmos operator role

aws iam create-role \
    --role-name kosmos-operator \
    --assume-role-policy-document file://trust-entity.json

Step 3: Create and attach policies

Create a minimal MKS permissions policy that follows the principle of least privilege.

Note: This policy includes EKS permissions because SPC mimics the EKS API for AWS compatibility.

Create kosmos-mks-policy.json:

cat > kosmos-mks-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EC2Permissions",
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:DescribeInstanceTypes",
        "ec2:DescribeRegions",
        "ec2:DescribeVpcs",
        "ec2:DescribeTags",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeRouteTables",
        "ec2:DescribeLaunchTemplateVersions",
        "ec2:DescribeLaunchTemplates",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeInternetGateways",
        "ec2:DescribeImages",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeAccountAttributes",
        "ec2:DeleteTags",
        "ec2:DeleteLaunchTemplateVersions",
        "ec2:DeleteLaunchTemplate",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteKeyPair",
        "ec2:CreateTags",
        "ec2:CreateSecurityGroup",
        "ec2:CreateLaunchTemplateVersion",
        "ec2:CreateLaunchTemplate",
        "ec2:CreateKeyPair",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:AuthorizeSecurityGroupEgress"
      ],
      "Resource": "*"
    },
    {
      "Sid": "IAMPermissions",
      "Effect": "Allow",
      "Action": [
        "iam:PassRole",
        "iam:ListRoles",
        "iam:ListRoleTags",
        "iam:ListInstanceProfilesForRole",
        "iam:ListInstanceProfiles",
        "iam:ListAttachedRolePolicies",
        "iam:GetRole",
        "iam:GetInstanceProfile",
        "iam:DetachRolePolicy",
        "iam:DeleteRole",
        "iam:CreateRole",
        "iam:AttachRolePolicy",
        "iam:AddRoleToInstanceProfile",
        "iam:CreateInstanceProfile",
        "iam:CreateServiceLinkedRole",
        "iam:DeleteInstanceProfile",
        "iam:RemoveRoleFromInstanceProfile"
      ],
      "Resource": "*"
    },
    {
      "Sid": "KMSPermissions",
      "Effect": "Allow",
      "Action": "kms:ListKeys",
      "Resource": "*"
    },
    {
      "Sid": "EKSPermissions",
      "Effect": "Allow",
      "Action": [
        "eks:UpdateNodegroupVersion",
        "eks:UpdateNodegroupConfig",
        "eks:UpdateClusterVersion",
        "eks:UpdateClusterConfig",
        "eks:UntagResource",
        "eks:TagResource",
        "eks:ListUpdates",
        "eks:ListTagsForResource",
        "eks:ListNodegroups",
        "eks:ListFargateProfiles",
        "eks:ListClusters",
        "eks:DescribeUpdate",
        "eks:DescribeNodegroup",
        "eks:DescribeFargateProfile",
        "eks:DescribeCluster",
        "eks:DeleteNodegroup",
        "eks:DeleteFargateProfile",
        "eks:DeleteCluster",
        "eks:CreateNodegroup",
        "eks:CreateFargateProfile",
        "eks:CreateCluster"
      ],
      "Resource": "*"
    },
    {
      "Sid": "VPCPermissions",
      "Effect": "Allow",
      "Action": [
        "ec2:ReplaceRoute",
        "ec2:ModifyVpcAttribute",
        "ec2:ModifySubnetAttribute",
        "ec2:DisassociateRouteTable",
        "ec2:DetachInternetGateway",
        "ec2:DescribeVpcs",
        "ec2:DeleteVpc",
        "ec2:DeleteSubnet",
        "ec2:DeleteRouteTable",
        "ec2:DeleteRoute",
        "ec2:DeleteInternetGateway",
        "ec2:CreateVpc",
        "ec2:CreateSubnet",
        "ec2:CreateRouteTable",
        "ec2:CreateRoute",
        "ec2:CreateInternetGateway",
        "ec2:AttachInternetGateway",
        "ec2:AssociateRouteTable"
      ],
      "Resource": "*"
    }
  ]
}
EOF
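
A quick local check, assuming jq, that the policy file parses and contains the five expected statements:

```shell
# List the statement Sids; a parse error here means the heredoc was mangled.
test -f kosmos-mks-policy.json \
  && jq -r '.Statement[].Sid' kosmos-mks-policy.json \
  || echo "kosmos-mks-policy.json not found" >&2
# Expected: EC2Permissions, IAMPermissions, KMSPermissions, EKSPermissions, VPCPermissions
```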

Create and attach the policy:

Create the policy:

aws iam create-policy \
    --policy-name kosmos-operator-policy \
    --policy-document file://kosmos-mks-policy.json

Attach to role:

aws iam attach-role-policy \
    --role-name kosmos-operator \
    --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/kosmos-operator-policy

Part 3: Creating an MKS cluster

Step 3.1: Log in to Kosmos

kosmos login console.kosmos.spcplatform.com --access-key ${ACCESS_KEY}

Step 3.2: Prepare cluster configuration

Create mks-cluster-config.yaml:

Important: Expand the section below and carefully review the YAML configuration. You must replace the placeholder values for subnets, publicAccessSources, and other environment-specific fields before creating the cluster.

cat > mks-cluster-config.yaml <<EOF
apiVersion: storage.kosmos.spcplatform.com/v1
kind: MKSCluster
metadata:
  labels:
    app.kubernetes.io/name: ${CLUSTER_NAME}
    app.kubernetes.io/instance: ${FLEET_ID}
    app.kubernetes.io/part-of: kosmos
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: kosmos
  name: ${CLUSTER_NAME}
  namespace: ${FLEET_ID}
spec:
  # displayName:
  description: "MKS cluster created via Kosmos CLI"
  authorization:
    adminUsers: ["${OWNER}"] # Ensure this is a list of quoted usernames
    owner: ${OWNER}
  mksConfig:
    clusterRole: "mks-service-role"  # MKS service role name created in Prerequisites
    kubernetesVersion: "1.32" # Replace value with the actual K8S version
    publicAccess: true
    privateAccess: true
    kosmosRoleArn: "arn:aws:iam::${ACCOUNT_ID}:role/kosmos-operator"
    displayName: "${CLUSTER_NAME}-display"
    region: "${REGION}"
    loggingTypes:
      ["api", "audit", "authenticator", "controllerManager", "scheduler"]
    secretsEncryption: false
    tags:
      Environment: "prd"
      ManagedBy: "kosmos"
    subnets:
      - "subnet-xxxxxxxxxxxxxxxxx"  # Replace with your subnet ID (public subnet 1)
      - "subnet-xxxxxxxxxxxxxxxxx"  # Replace with your subnet ID (public subnet 2)
      - "subnet-xxxxxxxxxxxxxxxxx"  # Replace with your subnet ID (private subnet 1)
      - "subnet-xxxxxxxxxxxxxxxxx"  # Replace with your subnet ID (private subnet 2)
    securityGroups: [] # Optional: specific security groups
    publicAccessSources:
      - "10.0.0.1/32"  # Replace with your actual public IP
      - "10.0.0.2/32"  # Replace with additional IPs as needed
    ebsCSIDriver: true # Enable if using EBS volumes
    imported: false
    nodeGroups:
      - nodegroupName: "${CLUSTER_NAME}-nodes" # Replace value with your actual node group name
        nodeRole: "mksNodeRole" # Node role name created in Prerequisites
        resourceTags:
          Environment: "prd"
        diskSize: 50  # Replace value with your desired disk size
        instanceType: "m5.xlarge" # Replace value with your desired instance type
        version: "1.32" # This value must match the above kubernetes version
        minSize: 1
        maxSize: 5
        desiredSize: 1
        gpu: false
        subnets: [] # Uses cluster subnets if empty
        tags:
          NodeGroup: "primary"
        labels:
          workload: "general"
        requestSpotInstances: false
EOF

Note: Before applying the YAML above, replace the environment-specific placeholders:

  • subnets – your actual subnet IDs
  • publicAccessSources – your actual public IPs
  • securityGroups – optional; leave the list empty to use defaults
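
The manual checks in the note above can be scripted. A sketch (the helper name and patterns are illustrative) that flags template values still present in the config:

```shell
# Guard against submitting the config with template values still in place.
check_placeholders() {
  # Matches the placeholder subnet IDs and example IPs used in this guide.
  if grep -Eq 'subnet-x{8,}|10\.0\.0\.[12]/32' "$1" 2>/dev/null; then
    echo "Placeholder values remain in $1; edit them before running 'kosmos create mks'." >&2
    return 1
  fi
  return 0
}

check_placeholders mks-cluster-config.yaml || true
```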

Tip: Use kosmos create mks --skeleton to see the correct YAML format.

Step 3.3: Create the cluster

Create the cluster:

kosmos create mks -f mks-cluster-config.yaml

Monitor creation status:

kosmos list mks --fleet ${FLEET_ID}

The cluster will show status “connecting” initially.

Connect the cluster to Kosmos

From an environment with cluster access:

Assume the Kosmos role:

aws sts assume-role \
    --role-arn arn:aws:iam::${ACCOUNT_ID}:role/kosmos-operator \
    --role-session-name kosmos-session \
    > assume-role-output.json

Export credentials needed for assuming the role:

export AWS_ACCESS_KEY_ID=$(jq -r '.Credentials.AccessKeyId' assume-role-output.json)
export AWS_SECRET_ACCESS_KEY=$(jq -r '.Credentials.SecretAccessKey' assume-role-output.json)
export AWS_SESSION_TOKEN=$(jq -r '.Credentials.SessionToken' assume-role-output.json)
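
The three exports above can also be done in one step; a compact alternative using jq's string interpolation (a sketch, not a required form):

```shell
# Export all three credential values from assume-role-output.json at once.
# Assumes credential values contain no whitespace (true for STS output).
eval "$(jq -r '.Credentials
  | "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)",
    "export AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)",
    "export AWS_SESSION_TOKEN=\(.SessionToken)"' \
  assume-role-output.json 2>/dev/null)"
```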

Update kubeconfig:

# SPC mimics the EKS API - 'aws eks' commands work seamlessly
aws eks update-kubeconfig --name ${CLUSTER_NAME} --region ${REGION}

Connect cluster to Kosmos:

kosmos join cluster ${CLUSTER_NAME} --fleet ${FLEET_ID}

Part 4: Importing an existing MKS cluster

Step 4.1: Prerequisites

Importing uses the same OIDC provider and kosmos-operator role created in Parts 1 and 2. You also need an existing MKS cluster in the target region.

Step 4.2: Prepare import configuration

For existing clusters, create mks-import-config.yaml:

cat > mks-import-config.yaml <<EOF
apiVersion: storage.kosmos.spcplatform.com/v1
kind: MKSCluster
metadata:
  labels:
    app.kubernetes.io/name: ${CLUSTER_NAME}
    app.kubernetes.io/instance: ${FLEET_ID}
    app.kubernetes.io/part-of: kosmos
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: kosmos
  name: ${CLUSTER_NAME}
  namespace: ${FLEET_ID}
spec:
  name: ${CLUSTER_NAME}
  description: "Existing MKS cluster imported to Kosmos"
  authorization:
    adminTeams: ["${ADMIN_TEAM}"]  # Ensure this is a list of team names
    adminUsers: ["${OWNER}"]  # Replace with your admin usernames
    owner: ${OWNER}
  mksConfig:
    displayName: "${CLUSTER_NAME}-imported"
    region: "${REGION}"
    imported: true # Key difference for imports
    kosmosRoleArn: "arn:aws:iam::${ACCOUNT_ID}:role/kosmos-operator"
    publicAccessSources:
      - "10.0.0.1/32"  # Replace with your actual public IP
    kubernetesVersion: "1.32" # Match existing cluster version
EOF

Step 4.3: Import the cluster

Prerequisite: Ensure you have the cluster name and region of the existing MKS cluster, as well as the kosmos-operator role ARN created in Part 2.

Verify cluster exists:

# SPC mimics the EKS API for AWS compatibility
aws eks describe-cluster --name ${CLUSTER_NAME} --region ${REGION}

Import cluster to Kosmos:

kosmos create mks --file mks-import-config.yaml

Check status:

kosmos list mks --fleet ${FLEET_ID}

Step 4.4: Connect to imported cluster

Update kubeconfig for existing cluster:

Note: The --name parameter must match the displayName from your import configuration (e.g., ${CLUSTER_NAME}-imported), not the metadata.name. SPC/AWS uses the displayName as the actual cluster identifier.

aws eks update-kubeconfig --name ${CLUSTER_NAME}-imported --region ${REGION}

Connect to Kosmos:

kosmos join cluster ${CLUSTER_NAME} --fleet ${FLEET_ID}

Part 5: Validation and usage

Step 5.1: Validate cluster access

Open a new terminal session:

# Login to Kosmos
kosmos login https://console.kosmos.spcplatform.com/

# Switch to your cluster context
kosmos use cluster ${CLUSTER_NAME} --fleet ${FLEET_ID}

# Test cluster access
kubectl get namespaces
kubectl get nodes
kubectl get pods --all-namespaces

Step 5.2: Verify cluster health

# Check node status
kubectl get nodes -o wide

# Check system pods
kubectl get pods -n kube-system

# Check Kosmos agent
kubectl get pods -n vcluster-platform

# Review cluster info
kubectl cluster-info

Install an application to test

Create nginx deployment

kubectl create deployment nginx-hello --image=nginx --port=80

Expose as a service

kubectl expose deployment nginx-hello --type=LoadBalancer --port=80

Check deployment status

kubectl get deployment nginx-hello
kubectl get pods -l app=nginx-hello
kubectl get svc nginx-hello

Get the service details

kubectl get svc nginx-hello -w

Clean up when done

kubectl delete deployment nginx-hello
kubectl delete service nginx-hello

Part 6: Cleanup and teardown


This section covers how to remove resources created in this guide.

Step 6.1: Remove cluster from Kosmos

Delete the MKS cluster:

kosmos delete mks --name ${CLUSTER_NAME} --fleet ${FLEET_ID}

Verify removal:

kosmos list mks --fleet ${FLEET_ID}

Step 6.2: Delete IAM resources (optional)

If you no longer need the Kosmos IAM roles and policies:

Detach and delete Kosmos operator policy:

aws iam detach-role-policy \
    --role-name kosmos-operator \
    --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/kosmos-operator-policy

aws iam delete-policy \
    --policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/kosmos-operator-policy

Delete the Kosmos operator role:

aws iam delete-role --role-name kosmos-operator

Delete node and service roles (if created for this cluster):

# Detach policies from node role
aws iam detach-role-policy \
    --role-name mksNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy

aws iam detach-role-policy \
    --role-name mksNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy

aws iam detach-role-policy \
    --role-name mksNodeRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly

aws iam delete-role --role-name mksNodeRole

# Detach policies from service role
aws iam detach-role-policy \
    --role-name mks-service-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

aws iam delete-role --role-name mks-service-role

Step 6.3: Remove OIDC provider (optional)

If you no longer need Kosmos to access your SPC account:

aws iam delete-open-id-connect-provider \
    --open-id-connect-provider-arn arn:aws:iam::${ACCOUNT_ID}:oidc-provider/console.kosmos.spcplatform.com/kosmos-oidc

Warning: Removing the OIDC provider will break all Kosmos-managed clusters in this account. Only do this if you are fully decommissioning Kosmos.


Troubleshooting

Common issues and solutions

1. Cluster stuck in “Connecting” status

  • Verify OIDC provider is correctly configured
  • Check IAM role trust relationships
  • Ensure network connectivity to cluster API endpoint
  • Verify Kosmos agent deployment: kubectl get pods -n kosmos-system

2. Authentication errors

  • Verify role ARN in cluster configuration matches created role
  • Check fleet ID in trust relationship matches your fleet
  • Ensure assumed role session is not expired

3. Network access issues

  • Confirm your IP is in publicAccessSources
  • Check security group rules allow required ports
  • Verify VPN connection if required

4. Node group creation failures

  • Verify node role has required policies attached
  • Check subnet availability zones match
  • Ensure instance type is available in region
  • Verify disk size meets minimum requirements

Notes

  • Replace all ${VARIABLE} placeholders with actual values
  • SPC mimics AWS APIs, so standard aws CLI commands and AWS-managed EKS policies work as shown
  • VPN requirements may vary based on your organization’s setup
  • Some features may require additional licensing or permissions
