MinIO is a popular open source object storage service that exposes an S3-compatible endpoint. In its early days, it was available as a tiny binary written in Go that could turn any directory on the host file system into an object storage endpoint. It was later containerized with support for Docker Engine. Earlier this year, MinIO evolved into a highly scalable, performant, cloud native, multi-tenant object storage engine with tight integration with Kubernetes.

This article explores how MinIO takes advantage of Kubernetes to deliver a cloud native object storage service in multi-tenant mode.

I am currently using MinIO on Rancher RKE2 as the model registry for Nvidia Triton Inference Server. It is an advanced use case where Triton stores and retrieves machine learning models to perform inference. In the upcoming tutorials, I will walk you through the steps of installing MinIO on RKE2 and configuring the Triton Inference Server to leverage MinIO as the model registry at the edge.

MinIO Kubernetes Operator

There are three components to the MinIO operator: the control plane, the CLI client, and the console. The control plane can be accessed through the CLI, which is available as an extension to kubectl, or through a GUI exposed as a Kubernetes service.

Where Kubernetes sits in the stack.

Installing the operator deploys a single CRD, Tenant, which contains the specification for defining tenants. Each tenant runs a secure, isolated MinIO instance on the same Kubernetes cluster but in a different namespace. We will take a closer look at tenants in the next section.
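For reference, a Tenant custom resource looks roughly like the following. This is a minimal sketch assuming the minio.min.io/v2 API group; field names can differ slightly between operator versions, so treat it as illustrative rather than authoritative:

```yaml
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: tenant1
  namespace: tenant1
spec:
  pools:
    - servers: 1              # number of MinIO server pods in this pool
      volumesPerServer: 4     # PVCs created per server pod
      volumeClaimTemplate:
        spec:
          storageClassName: local-path
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 256Mi  # per-volume slice of the tenant's capacity
```

In practice, the kubectl minio plugin generates a resource like this for you, as shown later in this article.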

Installing the operator is straightforward: download the binary that integrates with kubectl, make it executable, and initialize the control plane.

wget -O kubectl-minio
chmod +x kubectl-minio
mv kubectl-minio /usr/local/bin/
kubectl minio init

command line view of installing the Minio operator

The operator is deployed in a dedicated namespace, minio-operator.
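Once the init completes, you can verify what the operator installed. A quick check, assuming the default minio-operator namespace:

```shell
# List the deployments, services, and pods the operator created
kubectl get deployments,services,pods -n minio-operator
```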

Deploying the operator in its own namespace

As we can see, it has a couple of deployments and services that expose the console and the API for the client.

We can access the console by running the proxy command, which also prints the JWT security token.
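The proxy is started through the same kubectl plugin. A sketch, assuming the default minio-operator namespace:

```shell
# Forward the operator console to localhost and print the JWT to log in with
kubectl minio proxy -n minio-operator
```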

The JWT token for the proxy

Operator login screen

Minio operator GUI

At this point, we have the operator configured but don’t have the tenants yet. Before we can provision the tenants, we need to configure the storage backend for MinIO.

Storage Considerations for MinIO

Technically, MinIO is yet another stateful workload running on Kubernetes. It relies on the same primitives, such as storage classes, persistent volumes, and persistent volume claims, for persistence.

The only prerequisite for MinIO is a storage class that supports dynamic provisioning. When running on bare metal servers with high-performance SSD and NVMe disks, MinIO recommends using its own CSI driver, called DirectCSI. Unlike the local volume provider and host path volumes, DirectCSI supports dynamic provisioning as well as tiered volumes for hot, warm, and cold access tiers. You can point DirectCSI to the disks attached to the nodes of the cluster to create dedicated drives. It formats the drives and prepares them to be used with the storage class. I will cover the architecture and configuration of DirectCSI in an upcoming tutorial.

For building the model registry, I went with Rancher's Local Path Provisioner, which supports dynamic provisioning. You can use any storage provisioner with support for the WaitForFirstConsumer binding mode.
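If your cluster does not already ship with it, the Local Path Provisioner can be installed from Rancher's repository. A sketch, assuming the upstream manifest path is still current (check the project's README for the latest URL):

```shell
# Install Rancher's Local Path Provisioner
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml

# Confirm the storage class uses WaitForFirstConsumer binding
kubectl get storageclass local-path -o jsonpath='{.volumeBindingMode}'
```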

Configuring the Tenants

Think of each tenant as a dedicated S3 account that can contain one or more buckets. You can create a tenant by running the following command:

kubectl minio tenant create tenant1 \
--servers 1 \
--volumes 4 \
--capacity 1Gi \
--namespace tenant1 \
--storage-class local-path


Creating a tenant results in provisioning the PVs and PVCs and deploying a statefulset. Since we specified a capacity of 1Gi for the tenant, spread across four volumes, MinIO created four PVs, four PVCs, and one statefulset.
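The requested capacity is carved evenly across the volumes, so each of the four PVCs here requests roughly a quarter of 1Gi. A quick sanity check of that arithmetic (the helper below is illustrative only, not part of any MinIO tooling):

```python
def per_volume_request(total_gib: float, volumes: int) -> float:
    """Evenly split a tenant's total capacity across its volume claims."""
    return total_gib / volumes

# 1 Gi spread over 4 volumes -> each PVC requests 0.25 Gi (256 Mi)
print(per_volume_request(1, 4))
```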

command line for getting a persistent volume

Info on tenant 1 persistent volume

command line highlights of tenant1

Accessing the console confirms the creation of the tenant.

Minio GUI of Tenant 1

MinIO clients can access the API at the minio endpoint exposed in the tenant1 namespace.
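From inside the cluster, any S3-compatible client can then be pointed at that service. A sketch using MinIO's mc client, assuming the default service name minio in the tenant1 namespace and placeholder credentials (replace them with the keys generated for your tenant):

```shell
# Register the tenant endpoint with mc; --insecure tolerates self-signed certs
mc alias set tenant1 https://minio.tenant1.svc.cluster.local ACCESS_KEY SECRET_KEY --insecure

# Create a bucket to serve as the model registry
mc mb tenant1/models
```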

cluster access to Minio storage

In the next part of this series on running AI inference on RKE2, I will cover the end-to-end installation of MinIO and consuming the object storage service. Stay tuned.

The New Stack is a wholly owned subsidiary of Insight Partners, an investor in the following companies mentioned in this article: Docker.