This README includes:
- An installation guide for Nova on KIND clusters. The scripts in this repo will allow you to create a sandbox environment for using Nova's trial version (for managing up to 6 workload clusters). If you are interested in using the full version, please contact us at info@elotl.co
- Tutorials that walk you through the core functionalities of Nova.
We love feedback, so please feel free to ask questions by creating an issue in this repo, joining our Slack workspace (Elotl Free Trial), or writing to us at info@elotl.co
You should have:
- Docker installed and running (tested on version 27.0.3)
- Kind installed (tested on version 0.21.0)
- kubectl installed (tested on client version v1.31.0)
- jq installed (tested on version 1.7)
- envsubst installed (tested on version 0.22.4)
Please note that Nova on KIND is tested on:
- Mac OS Version 13.6
- Ubuntu Version 22.04.1
In some Linux environments, the default inotify resource limits are too low to create enough Kind clusters to successfully install Nova.
To increase these inotify limits, edit the file /etc/sysctl.conf and add these lines:

```
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
```

Use the following command to load the new sysctl settings:

```shell
sudo sysctl -p
```

Ensure these variables have been set correctly by using these commands:

```shell
sysctl -n fs.inotify.max_user_watches
sysctl -n fs.inotify.max_user_instances
```
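The checks above can be wrapped in a small script. This is a sketch of a hypothetical helper (not part of this repo) that warns when the current limits are below the values recommended above; it falls back to reading /proc directly if `sysctl` is unavailable.

```shell
# Hypothetical helper: warn if inotify limits are below what Nova on KIND needs.
required_watches=524288
required_instances=512

# `sysctl -n` and reading /proc/sys are equivalent on Linux.
watches=$(sysctl -n fs.inotify.max_user_watches 2>/dev/null || cat /proc/sys/fs/inotify/max_user_watches)
instances=$(sysctl -n fs.inotify.max_user_instances 2>/dev/null || cat /proc/sys/fs/inotify/max_user_instances)

if [ "$watches" -lt "$required_watches" ]; then
  echo "fs.inotify.max_user_watches is $watches; raise it to at least $required_watches"
fi
if [ "$instances" -lt "$required_instances" ]; then
  echo "fs.inotify.max_user_instances is $instances; raise it to at least $required_instances"
fi
```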
`novactl` is our CLI that allows you to easily create new Nova Control Planes, register new Nova Workload Clusters, check the health of your Nova cluster, and more!
```shell
curl -s https://api.github.com/repos/elotl/novactl/releases/latest | \
  jq -r '.assets[].browser_download_url' | \
  grep "$(uname -s | tr '[:upper:]' '[:lower:]')-$(uname -m | sed 's/x86_64/amd64/;s/i386/386/;s/aarch64/arm64/')" | \
  xargs -I {} curl -L {} -o novactl
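The `grep` in the pipeline above selects the release asset for your platform. This sketch shows how the `<os>-<arch>` string it matches on is derived from `uname` output:

```shell
# How the pipeline normalizes `uname` output into the "<os>-<arch>" form
# that appears in the release asset URLs (e.g. "linux-amd64").
os=$(uname -s | tr '[:upper:]' '[:lower:]')                            # Linux -> linux, Darwin -> darwin
arch=$(uname -m | sed 's/x86_64/amd64/;s/i386/386/;s/aarch64/arm64/')  # x86_64 -> amd64, aarch64 -> arm64
echo "${os}-${arch}"
```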
Once you have the binary, make it executable:

```shell
chmod +x novactl
```

Then move it onto your PATH. For example, to install it in /usr/local/bin on Unix-like operating systems:

```shell
sudo mv novactl /usr/local/bin/novactl
```
`novactl` is ready to work as a kubectl plugin, and our docs assume you're using it that way. To make this work, simply run:

```shell
sudo novactl kubectl-install
```
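For background: kubectl treats any executable named `kubectl-<name>` found on your PATH as a plugin invocable as `kubectl <name>`; installing a `kubectl-nova` binary is what makes `kubectl nova` work. The sketch below demonstrates the mechanism with a throwaway plugin name (`kubectl-hello` is purely illustrative):

```shell
# Demonstrate kubectl's plugin naming convention with a throwaway plugin.
demo_dir=$(mktemp -d)
cat > "$demo_dir/kubectl-hello" <<'EOF'
#!/bin/sh
echo "hello from a kubectl plugin"
EOF
chmod +x "$demo_dir/kubectl-hello"
# kubectl would dispatch `kubectl hello` to this executable:
PATH="$demo_dir:$PATH" kubectl-hello
```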
Make sure you have the expected `novactl` version installed. For example, if you're expecting v0.9.0, this is how you can check:

```shell
kubectl nova --version
```

```
kubectl-nova version v0.9.0 (git: 58407116) built: 20240312092623
```
Navigate to the root of the repository.
The setup script creates and configures three kind clusters: one becomes the Nova Control Plane and the other two become Nova workload clusters.
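Before running the setup script, you can confirm the prerequisite tools are on your PATH. This is a hypothetical pre-flight check, not part of the repo:

```shell
# Hypothetical pre-flight check: report any missing prerequisite tools.
missing=0
for tool in docker kind kubectl jq envsubst; do
  command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all prerequisites found"
```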
```shell
export NOVA_NAMESPACE=elotl
export NOVA_CONTROLPLANE_CONTEXT=nova
export K8S_CLUSTER_CONTEXT_1=k8s-cluster-1
export K8S_CLUSTER_CONTEXT_2=k8s-cluster-2
export K8S_HOSTING_CLUSTER_CONTEXT=kind-hosting-cluster
export NOVA_WORKLOAD_CLUSTER_1=kind-wlc-1
export NOVA_WORKLOAD_CLUSTER_2=kind-wlc-2
export K8S_HOSTING_CLUSTER=hosting-cluster

./scripts/setup_trial_env_on_kind.sh
```
Once installation finishes, you can use the following command to export the Nova Control Plane kubeconfig as well as the kubeconfigs of the hosting (or management) cluster and the workload clusters:

```shell
export KUBECONFIG=$HOME/.nova/nova/nova-kubeconfig:$PWD/kubeconfig-cp:$PWD/kubeconfig-workload-1:$PWD/kubeconfig-workload-2
```
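`KUBECONFIG` is a colon-separated list of files that kubectl merges into one view. As a sanity check (a hypothetical snippet, not part of the repo), you can confirm that each listed kubeconfig actually exists on disk:

```shell
# Split KUBECONFIG on ':' and check each file exists.
echo "$KUBECONFIG" | tr ':' '\n' | while read -r f; do
  if [ -f "$f" ]; then echo "found:   $f"; else echo "missing: $f"; fi
done
```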
This gives you access to the Nova Control Plane (context `${NOVA_CONTROLPLANE_CONTEXT}`), the cluster hosting the Nova Control Plane (context `kind-${K8S_HOSTING_CLUSTER}`), and the two workload clusters (contexts `kind-${NOVA_WORKLOAD_CLUSTER_1}` and `kind-${NOVA_WORKLOAD_CLUSTER_2}`).
To interact with the Nova Control Plane, use the `--context=${NOVA_CONTROLPLANE_CONTEXT}` flag in kubectl commands, e.g.:

```shell
kubectl --context=${NOVA_CONTROLPLANE_CONTEXT} get clusters
```

```
NAME    K8S-VERSION   K8S-CLUSTER   REGION   ZONE   READY   IDLE   STANDBY
wlc-1   1.28          wlc-1                         True    True   False
wlc-2   1.28          wlc-2                         True    True   False
```
Optional: you may rename the Kubernetes contexts, if you want to give them more meaningful names, as follows:

```shell
kubectl config rename-context "kind-${K8S_HOSTING_CLUSTER}" ${K8S_HOSTING_CLUSTER_CONTEXT}
kubectl config rename-context "kind-${NOVA_WORKLOAD_CLUSTER_1}" ${K8S_CLUSTER_CONTEXT_1}
kubectl config rename-context "kind-${NOVA_WORKLOAD_CLUSTER_2}" ${K8S_CLUSTER_CONTEXT_2}
```
Tutorials are available for the following Nova features:
- Annotation-based Scheduling
- Policy-based Scheduling
- Capacity-based Scheduling
- Spread Scheduling
- Just In Time Clusters
To tear down the trial environment, run:

```shell
./scripts/teardown_kind_cluster.sh
```
If you'd like to try Nova on the cloud (AWS, GCP, Azure, OCI, on-prem), please grab free trial bits at https://www.elotl.co/nova-free-trial.html