Introduction
Setting up a Kubernetes cluster in a non-default VPC on Crusoe Cloud requires additional network configuration to ensure proper connectivity. This guide outlines how to create a Kubernetes cluster using the UI, the CLI, or Terraform, and details the firewall rules required to maintain communication within the VPC.
Prerequisites:
- An existing non-default VPC and subnet(s).
- Access and the necessary permissions for Managed Orchestration.
- The Crusoe CLI installed and authenticated.
Firewall Rules for Non-Default VPC
Note: Crusoe Cloud does not create these rules by default in a NON-DEFAULT VPC. Additionally, the 'Destination Resource' for rules may not be visible via the UI, as control plane nodes are fully managed.
Both ingress and egress rules are required to keep connectivity within the CMK cluster intact. Removing or modifying these rules will cause connectivity failures, with symptoms such as connection resets and connection timeouts.
To enable connectivity for a CMK cluster in a non-default VPC, configure the following firewall rules before creating the CMK cluster:
Ingress Rule
- VPC Network: <VPC-Name>
- Name: <FirewallRule-Name>
- Direction: Ingress
- Action: Allow
- Protocols: TCP, UDP
- Source Ports: All (*)
- Source: <VPC-Name>
- Destination Ports: All (*)
- Destination: <VPC-Name>
Egress Rule
- VPC Network: <VPC-Name>
- Name: <FirewallRule-Name>
- Direction: Egress
- Action: Allow
- Protocols: TCP, UDP
- Source Ports: All (*)
- Source: <VPC-Name>
- Destination Ports: All (*)
- Destination: 0.0.0.0/0
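If you manage your network with Terraform, the two rules above can be expressed with the Crusoe provider's firewall rule resource. The following is a sketch, assuming the `crusoe_vpc_firewall_rule` resource; the placeholder network ID and CIDR values must be replaced with your own, and attribute names should be verified against the current provider documentation:

```hcl
# Ingress: allow all TCP/UDP traffic within the VPC
resource "crusoe_vpc_firewall_rule" "cmk_ingress" {
  network           = "<VPC-Network-ID>" # ID of your non-default VPC network
  name              = "cmk-allow-intra-vpc-ingress"
  action            = "allow"
  direction         = "ingress"
  protocols         = "tcp,udp"
  source            = "<VPC-CIDR>" # the VPC's CIDR range
  source_ports      = "1-65535"
  destination       = "<VPC-CIDR>"
  destination_ports = "1-65535"
}

# Egress: allow all TCP/UDP traffic from the VPC to any destination
resource "crusoe_vpc_firewall_rule" "cmk_egress" {
  network           = "<VPC-Network-ID>"
  name              = "cmk-allow-egress"
  action            = "allow"
  direction         = "egress"
  protocols         = "tcp,udp"
  source            = "<VPC-CIDR>"
  source_ports      = "1-65535"
  destination       = "0.0.0.0/0"
  destination_ports = "1-65535"
}
```

Applying these rules before cluster creation avoids the connection resets and timeouts described above.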
Setting Up CMK in a Non-Default VPC
Using the Crusoe Cloud UI
1. Visit the Crusoe Cloud console.
2. Click the Orchestration tab in the left navigation.
3. Click the Create Cluster button.
4. Follow the UI flow to fill in all required fields.
5. (Optional) Specify the Service and Pod network CIDRs for Cilium and select any additional add-ons to deploy into the cluster.
6. Click the Create button.
Creating a Node Pool via UI
1. Visit the Crusoe Cloud console.
2. Click the Orchestration tab in the left navigation.
3. Select the cluster you want to edit.
4. Click the Create Node Pool button.
5. Fill out the required fields specifying the type and count of nodes.
6. Click the Create button.
Using the CLI
Creating a Kubernetes Cluster
crusoe kubernetes clusters create \
--name my-first-cluster \
--cluster-version 1.30.8-cmk.18 \
--location us-east1-a \
--subnet-id 6f8e2a1b-7b1d-4c8e-a9f2-8e3d6c1f2a0c \
--add-ons "nvidia_gpu_operator,nvidia_network_operator,crusoe_csi,cluster_autoscaler"
Creating a Node Pool
crusoe kubernetes nodepools create \
--name my-first-nodepool \
--cluster-id 6f8e2a1b-7b1d-4c8e-a9f2-8e3d6c1f2a0c \
--type h100-80gb-sxm-ib.8x \
--count 4 \
--ib-partition-id 4c8e2a1b-7b1d-4c8e-a9f2-8e3d6c1f2a0c
Using Terraform
terraform {
  required_providers {
    crusoe = {
      source = "crusoecloud/crusoe"
    }
  }
}

locals {
  my_ssh_key = file("~/.ssh/id_ed25519.pub") # Replace with the actual path to your public SSH key
}

resource "crusoe_kubernetes_cluster" "my_first_cluster" {
  name      = "demo"
  version   = "1.30.8-cmk.18"
  location  = "eu-iceland1-a"
  subnet_id = "5275ba24-f60a-4fd7-9568-ba43ab6f1e96"
  add_ons   = ["nvidia_gpu_operator", "nvidia_network_operator", "crusoe_csi", "cluster_autoscaler"]
}

resource "crusoe_kubernetes_node_pool" "s1a_nodepool" {
  name           = "demo-nodepool"
  cluster_id     = crusoe_kubernetes_cluster.my_first_cluster.id
  instance_count = "2"
  type           = "s1a.40x"
  ssh_key        = local.my_ssh_key
}
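For GPU node pools, the CLI example above passes an InfiniBand partition with `--ib-partition-id`. A Terraform equivalent might look like the sketch below; the `ib_partition_id` attribute name is assumed from the CLI flag and should be verified against the current provider documentation:

```hcl
resource "crusoe_kubernetes_node_pool" "h100_nodepool" {
  name            = "demo-gpu-nodepool"
  cluster_id      = crusoe_kubernetes_cluster.my_first_cluster.id
  instance_count  = "4"
  type            = "h100-80gb-sxm-ib.8x"
  ssh_key         = local.my_ssh_key
  ib_partition_id = "<IB-Partition-ID>" # assumed attribute; check the crusoe provider docs
}
```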
Conclusion
By following these steps, you can successfully deploy a Kubernetes cluster in a non-default VPC using Terraform, CLI, or UI. Ensuring that the correct firewall rules are configured is critical to maintaining connectivity within the cluster. If you experience connectivity issues, verify that the firewall rules are properly applied and that they allow intra-VPC communication.