Kubernetes continues to expand as a critical infrastructure tool. With this expansion comes the need to give teams access to application development and testing environments with appropriate guardrails.
Giving access to full Kubernetes clusters is less than ideal from a cost, management, and security perspective. Kubernetes sandbox technologies help in various scenarios, including automated and manual application testing, Kubernetes version upgrade testing, and quickly testing a Helm chart. A sandbox is the best place to test version upgrades, new features, and supporting services before implementing them on production clusters. You can also use it to investigate how a Kubernetes cluster behaves under specific configurations, such as different Kubernetes versions or the presence or absence of custom resources.
There are many ways to run Kubernetes sandbox clusters, from local and simple to more complex cloud-based solutions. There is also the new and innovative concept of Kubernetes virtual clusters, which you can use as a sandboxed cluster within a cluster.
This article discusses the pros and cons of several Kubernetes sandbox solutions, from local tools to cloud-based and virtual clusters, and provides short tutorials on how to get started with each.
Kubernetes sandbox solutions for local environments allow developers to run, test, and experiment with Kubernetes clusters on their own machines without needing a full-scale, cloud-based Kubernetes setup. These tools are essential for learning, development, and testing purposes.
Kubernetes-in-Docker (kind) is a solution that runs a local Kubernetes cluster using only Docker. It was developed to test Kubernetes but can also be used for local development.
To run kind, you’ll need the following:
- Docker (or another supported container runtime, such as Podman) installed and running
- kubectl, to interact with the cluster
Kind makes it easy to create a Kubernetes cluster on a local machine. By default, kind creates a single-node cluster, meaning the control and data planes are on the same node. It is possible to run more than one node, but it is outside the scope of this article.
There are some potential drawbacks worth mentioning for kind. First, you have to manually apply any add-ons using external manifest files; unlike some other local Kubernetes distributions, kind has no integrated add-on mechanism.
The second potential drawback is that kind does not support cluster autoscaling. This means you must be thoughtful about the number of nodes you need and will not be able to test any scenarios requiring auto-scaling. Depending on your testing scenarios, this may be a blocker.
Many organizations run kind within their CI/CD pipelines to support automated testing. This typically creates a bottleneck and flaky test results because of scalability issues and reduced control over the underlying infrastructure (runners). Using virtual clusters provided by Uffizzi is a way to dramatically reduce these issues and the total time to run tests - often by a factor of 5x (see the case study with the ingress-nginx project).
The first step is to install kind as per the instructions for your system on their Quick Start page. Once kind is installed, you can verify by checking the version.
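For example:

```bash
kind version
```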
For this example and throughout the article, we will use the following example manifest file. Using your favorite editor, save the following file as test-app.yaml
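A minimal sketch of such a manifest is shown below; the resource names, labels, and image tag are illustrative assumptions, so adjust them to taste:

```yaml
# test-app.yaml -- a simple nginx app exposed via a Service and an Ingress
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: test-app
spec:
  selector:
    app: test-app
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-app
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-app
            port:
              number: 80
```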
The test-app.yaml contains three Kubernetes resources: a Deployment, Service, and Ingress resource. These three resources represent a straightforward application that launches an nginx container and allows you to send requests to it via the Ingress and Service resources.
Because we use an Ingress resource, we must create a kind configuration file to label the nodes properly.
Save the following file to kind-config.yaml
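The configuration below follows kind’s ingress guide: it labels the control-plane node with ingress-ready=true and maps ports 80 and 443 from the host to the node:

```yaml
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```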
The next step is to create our kind cluster using this config file.
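Pass the config file via the --config flag:

```bash
kind create cluster --config kind-config.yaml
```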
Verify kubectl works by querying the nodes.
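You should see a single control-plane node in the Ready state:

```bash
kubectl get nodes
```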
It was noted earlier that add-ons must be installed via manifest files. The below step adds an Ingress controller to the cluster via a manifest file.
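At the time of writing, kind’s documentation points to the ingress-nginx manifest below; check the kind ingress guide for the current URL:

```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
```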
Verify the manifest was applied.
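One way is to wait until the controller pod reports ready:

```bash
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
```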
Apply our test-app.yaml file.
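Deploying is a single command:

```bash
kubectl apply -f test-app.yaml
```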
Verify the deployment works; you should see the nginx welcome page.
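Because the kind config maps node port 80 to the host, a plain curl against localhost should reach the app:

```bash
curl localhost
```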
Once your experiments are finished, you can delete the cluster like so:
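For a cluster created with the default name, this is simply:

```bash
kind delete cluster
```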
This shows that even for essential features such as ingress, kind requires additional configuration.
Minikube is another tool to run a Kubernetes cluster locally on your computer. It is older than kind, offers more functionality, and supports more backends. It creates only one node by default, but you can use command-line options to set the number of nodes.
Minikube manages addons straight from the command line, which means the addons will likely work out of the box. It supports a surprising collection of useful add-ons, for example:
- ingress (an NGINX-based ingress controller)
- dashboard (the Kubernetes Dashboard web UI)
- metrics-server (resource metrics for kubectl top and pod autoscaling)
- registry (a local container registry)
However, just like kind, minikube does not support cluster autoscaling. So, if you want to run any tests involving autoscaling of your cluster, minikube won’t be able to help you either.
To run a minikube cluster, you need the following:
- 2 CPUs or more and at least 2 GB of free memory
- At least 20 GB of free disk space
- A container or virtual machine manager, such as Docker
Follow the instructions for installing minikube on your system here.
Once minikube is installed, verify it works by checking the version.
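For example:

```bash
minikube version
```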
Next, create a simple cluster using the Docker backend:
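Specifying the driver explicitly:

```bash
minikube start --driver=docker
```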
After a couple of minutes, you will have a working cluster. Now let’s have a look at adding the ingress-nginx addon:
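The addon is named simply ingress:

```bash
minikube addons enable ingress
```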
Again, wait a few minutes, and ingress-nginx is ready and working out of the box.
Let’s deploy and test our test app:
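Reusing the manifest from the kind section:

```bash
kubectl apply -f test-app.yaml
```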
To test our app on localhost, we will need to run the following in a separate terminal window and keep it running.
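The minikube tunnel command creates a network route so the ingress is reachable on localhost; it may prompt for your password:

```bash
minikube tunnel
```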
Now verify we can connect to our application.
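With the tunnel running, test it the same way as before:

```bash
curl localhost
```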
Finally, delete the Minikube cluster:
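Cleanup is a single command:

```bash
minikube delete
```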
Kind and minikube are helpful tools for local testing, but they fall short compared to testing in a real-world setting. Since these tools typically operate on a single machine, they cannot thoroughly test configurations across multiple applications and servers, and their ingress capabilities are limited.
Local machines also lack the CPU, RAM, and disk resources of production servers, limiting their effectiveness in testing scalability. The localized nature of these environments also makes collaborative problem-solving more challenging, since a local environment is difficult to share.
These solutions have their use cases, but let’s look at remote sandbox solutions as well.
Remote Kubernetes sandbox solutions provide a cloud-based environment where you can experiment with Kubernetes without having to set up and maintain the infrastructure on your local machine. These solutions are excellent for learning, testing, and even prototyping Kubernetes deployments.
You can run a temporary cluster straight on a major cloud vendor platform. The key here is to have a way to spin up a cluster easily.
The main advantage of this Kubernetes sandbox solution is that you can create a cluster as close as possible to your production cluster. This includes the same Kubernetes version, storage classes, network plugins, etc. Cloud sandbox clusters also support autoscaling, so running tests (such as stress tests) that involve the cluster automatically increasing and decreasing the number of nodes is now possible. Lastly, if you need to test specific features of a cloud vendor, this would be the only way to test those out.
The method also has some disadvantages. The first is that it will cost you money. It is also generally much slower to provision than a local cluster, and it can be much more complicated, with a far wider range of configuration options depending on the cloud vendor and how much customization you need. One could argue that it is less a sandbox and more simply a separate cluster.
Let’s look at how the largest public cloud vendor, AWS, does it.
Amazon Web Services (AWS) has a command-line tool that makes creating and managing Elastic Kubernetes Service (EKS) clusters quite easy: eksctl. Below, we give a short tutorial on how to get started.
Important: If you follow this tutorial, it will cost you money!
It also requires the following prerequisites:
- The AWS CLI, with credentials configured
- kubectl
Ensure you have configured your AWS credentials correctly; refer to this help document to check. You should be able to run a command such as aws sts get-caller-identity successfully.
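The command prints the account and IAM identity in use, which is a quick sanity check:

```bash
aws sts get-caller-identity
```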
Also, note that eksctl is a single executable; the easiest way to ensure you run the latest version is to download it. Please follow the instructions on the official website.
Assuming you properly configured your AWS credentials, create a simple, temporary EKS cluster as shown.
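A minimal invocation looks something like this; the cluster name, region, and node count below are illustrative assumptions, so substitute your own:

```bash
eksctl create cluster --name sandbox-cluster --region us-east-1 --nodes 2
```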
Now you will have to wait about 15 minutes to get the cluster up and running. Eksctl will also update your kubeconfig file and give you direct access to the cluster:
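Once eksctl finishes, kubectl should already be pointing at the new cluster:

```bash
kubectl get nodes
```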
Eksctl supports “add-ons,” but these are only EKS-specific components such as the EBS-backed CSI, VPC CNI, etc. These details would be specific to EKS or any other cloud vendor.
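For example, assuming a cluster named sandbox-cluster as above, the EBS CSI driver add-on can be installed like this (a sketch; check eksctl’s add-on documentation for any required IAM settings):

```bash
eksctl create addon --name aws-ebs-csi-driver --cluster sandbox-cluster
```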
Finally, it is important to delete the cluster afterward. Again, this will take quite a while to finish.
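Using the same name as before:

```bash
eksctl delete cluster --name sandbox-cluster
```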
This new technology allows you to run Kubernetes clusters inside an existing one. It makes running temporary sandbox clusters practical, especially when your use case requires more than a local environment, such as collaboration or integration with third-party services to mimic a production environment.
The obvious drawback is that you need to have a cluster already running, and it must be “big enough” to accommodate any temporary cluster that could run simultaneously.
Vendors can simplify the process by allowing you to create virtual clusters on their cloud platforms. Uffizzi is one such vendor. Uffizzi specializes in enabling self-service access to isolated environments that are 10x faster to spin up and 10x less expensive than operating a full cluster.
Let's look at how to create a virtual cluster using their system.
The first step is to install the Uffizzi command line tool, explained here. As per usual, you will need to have kubectl installed. You will also need an account at either GitHub or GitLab.
The next step is to log in to Uffizzi and create a new project, which is achieved by simply running the following command:
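Assuming the standard login flow, which opens a browser for authentication (and, per the article, covers project creation as well):

```bash
uffizzi login
```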
Creating a cluster is easy. Just run the following (replace “testcluster” with the chosen name of your cluster):
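A sketch of the create command; the positional cluster name is an assumption, so check uffizzi cluster --help if your version differs:

```bash
uffizzi cluster create testcluster
```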
About a minute later, your cluster will be ready.
Before deploying our app, we must make one small change in its manifest file: change the ingressClassName from nginx to uffizzi. This uses the default Uffizzi ingress class, which is practical because it allows you to access the ingress without needing a domain name. Once you have made the change, apply the manifest file like so.
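After the one-line edit, the apply is the same as before:

```bash
# in test-app.yaml, the Ingress spec should now read:
#   ingressClassName: uffizzi
kubectl apply -f test-app.yaml
```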
You will need to wait for the ingress to be created. The uffizzi ingress class overwrites the ingress hostname to make it more practical. After a minute or so, the assigned hostname should appear:
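Checking the ingress until the HOSTS column is populated is one way to watch for it:

```bash
kubectl get ingress
```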
So let’s try to access this URL.
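Substituting the hostname from the previous output (a placeholder is shown below):

```bash
# replace with the hostname reported by kubectl get ingress
curl https://<assigned-hostname>
```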
It works! Finally, let’s delete the cluster:
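Again a sketch, mirroring the create command:

```bash
uffizzi cluster delete testcluster
```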
This article reviewed several ways you could run Kubernetes sandbox environments. We looked at kind and minikube as ways to locally test applications. These solutions can work well for testing deployments if they do not rely on any specific cloud components. They cost nothing and are quick to spin up and spin down.
We also considered using a cloud provider to spin up clusters separate from the production cluster. The benefits here are that we can get closer to the exact specs of the production environment. However, this comes with the major drawback that you are paying hourly to run your tests. If you forget to delete the cluster after your testing, this could incur significant costs.
Lastly, we looked at virtual clusters, specifically Uffizzi. You have the benefit of being able to temporarily spin up clusters, test your deployment, and spin them down automatically, taking many manual processes off the end users’ hands. They also have the added benefit of running as virtual clusters within your already running cluster.
Ultimately, Kubernetes sandbox testing is important, and it is more likely to be done if it is straightforward to set up and manage.