Introduction
Setting up RAID0 storage on CMK (Crusoe Managed Kubernetes) can significantly enhance I/O performance by striping data across multiple NVMe devices. This guide walks you through deploying a DaemonSet that automatically configures RAID0 on each node and mounts it at /mnt/data.
Prerequisites
Before proceeding, ensure you have:
- A running CMK cluster with worker nodes that have multiple NVMe drives (see the check below).
- kubectl installed and configured to interact with the cluster.
- Familiarity with Kubernetes DaemonSets and hostPath volumes.
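To confirm that a node actually exposes more than one NVMe drive, you can SSH into it and list its block devices. This is a quick check; device names and sizes will vary by instance type:
lsblk -d -o NAME,SIZE,MODEL | grep nvme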
Step-by-Step Instructions
Step 1: Deploy the RAID0 Setup DaemonSet
Create a DaemonSet that configures RAID0 on each node and mounts it at /mnt/data.
Save the following manifest as raid-setup.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: raid-setup
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: raid-setup
  template:
    metadata:
      labels:
        name: raid-setup
    spec:
      hostPID: true
      hostNetwork: true
      containers:
        - name: raid-setup
          image: ubuntu:20.04
          securityContext:
            privileged: true
          command: ["/bin/bash", "-c"]
          args:
            - |
              set -euo pipefail
              echo "Starting RAID setup script..."
              # Remove Fluent Bit repository file to avoid GPG errors
              echo "Removing Fluent Bit repository file..."
              rm -f /etc/apt/sources.list.d/fluent-bit.list
              # Update and install required packages
              echo "Updating apt repositories..."
              apt-get update --allow-insecure-repositories
              echo "Installing nvme-cli, mdadm, and gawk..."
              apt-get install -y nvme-cli mdadm gawk
              # Count NVMe devices ('|| true' keeps the count at 0 under pipefail when no devices match)
              num_nvme=$(nvme list | grep -i /dev | wc -l || true)
              echo "Number of NVMe devices: $num_nvme"
              if [[ $num_nvme -eq 1 ]]; then
                dev_name=$(nvme list | grep -i /dev | awk '{print $1}')
                echo "Only one NVMe device detected: ${dev_name}"
                mkfs.ext4 -F ${dev_name}
                mkdir -p /mnt/data
                mount -t ext4 ${dev_name} /mnt/data || { echo "Mount failed for ${dev_name}"; exit 1; }
              elif [[ $num_nvme -gt 1 ]]; then
                dev_names=$(nvme list | grep -i /dev | awk '{print $1}')
                echo "Multiple NVMe devices detected: ${dev_names}"
                mdadm --create --verbose /dev/md127 --level=0 --raid-devices=${num_nvme} ${dev_names} --force
                sleep 5
                mkfs.ext4 -F /dev/md127
                mkdir -p /mnt/data
                mount -t ext4 /dev/md127 /mnt/data || { echo "Mount failed for RAID device"; exit 1; }
              else
                echo "No ephemeral drives detected" && exit 1
              fi
              echo "RAID setup completed successfully. Entering sleep..."
              exec sleep infinity
          volumeMounts:
            - name: host-dev
              mountPath: /dev
            - name: host-etc
              mountPath: /etc
            - name: data-mount
              mountPath: /mnt/data
              mountPropagation: Bidirectional
      volumes:
        - name: host-dev
          hostPath:
            path: /dev
        - name: host-etc
          hostPath:
            path: /etc
        - name: data-mount
          hostPath:
            path: /mnt/data
            type: DirectoryOrCreate
Apply the DaemonSet:
kubectl apply -f raid-setup.yaml
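Optionally, wait for the DaemonSet to finish rolling out before checking the nodes:
kubectl rollout status daemonset/raid-setup -n kube-system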
Step 2: Verify RAID0 Setup
First, check that the raid-setup pods are running:
kubectl get pods -n kube-system | grep raid-setup
SSH into a node running the DaemonSet and verify the mount:
df -h | grep /mnt/data
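While on the node, you can also inspect the array itself. The command below assumes the array was created as /dev/md127, as in the DaemonSet script:
mdadm --detail /dev/md127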
Step 3: Test RAID0 Storage with a Pod
To confirm that pods can access the RAID0 storage, create a test pod. Save the following manifest as test-raid0.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: test-raid0
spec:
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: raid-storage
          mountPath: /mnt/data
  volumes:
    - name: raid-storage
      hostPath:
        path: /mnt/data
        type: Directory
Apply the pod configuration:
kubectl apply -f test-raid0.yaml
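Optionally, wait for the pod to become Ready before exec'ing into it (the 60-second timeout is an arbitrary choice):
kubectl wait --for=condition=Ready pod/test-raid0 --timeout=60s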
Once the pod is running, exec into it and check the mounted volume:
kubectl exec -it test-raid0 -- sh
/ # df -h /mnt/data
Filesystem Size Used Available Use% Mounted on
/dev/md127 6.9T 28.0K 6.6T 0% /mnt/data
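To confirm that writes land on the array, you can create a small test file from inside the pod. This is just a sketch; the 100 MB size and file name are arbitrary:
/ # dd if=/dev/zero of=/mnt/data/testfile bs=1M count=100
/ # ls -lh /mnt/data/testfile
/ # rm /mnt/data/testfile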
Example
Assume you have a CMK cluster with two NVMe devices per node. After deploying the DaemonSet:
- The mdadm command combines the two devices into a RAID0 array.
- The array is formatted with ext4 and mounted at /mnt/data.
- A pod mounting /mnt/data can now read from and write to the RAID0 volume.
Troubleshooting
RAID0 Not Mounting on Nodes
Run the following command to check for errors:
kubectl logs -n kube-system daemonset/raid-setup
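Note that kubectl logs daemonset/raid-setup returns logs from only one pod. To see the logs from every node's pod, you can select by the name=raid-setup label used in the manifest:
kubectl logs -n kube-system -l name=raid-setup --prefix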
SSH into the node and inspect the RAID setup with:
cat /proc/mdstat
lsblk
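If an earlier attempt left a partially created array behind, you may need to clean it up on the node before re-running the DaemonSet. The commands below are a destructive sketch; the NVMe device names are examples and must match the drives on your node:
umount /mnt/data || true
mdadm --stop /dev/md127
mdadm --zero-superblock /dev/nvme0n1 /dev/nvme1n1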
RAID0 Disappears After Reboot
On the affected node, save the array configuration so it is reassembled at boot:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
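You can also persist the mount itself so /mnt/data is re-mounted automatically. This is a sketch, run as root on the node, assuming the array device is /dev/md127:
echo "UUID=$(blkid -s UUID -o value /dev/md127) /mnt/data ext4 defaults,nofail 0 2" >> /etc/fstab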