This Kubernetes Operator makes it easy to deploy SeaweedFS onto your Kubernetes cluster.
The operator manages the complete SeaweedFS infrastructure on Kubernetes, including Master servers, Volume servers, Filer services, and IAM (Identity and Access Management) services. This provides a scalable, resilient distributed file system with an S3-compatible API and built-in authentication.
The difference to seaweedfs-csi-driver is that the infrastructure (SeaweedFS) itself runs on Kubernetes as well (Master, Filer, Volume servers) and can therefore scale with it as you need. It is also far more resilient to failures than a simple systemd service when it comes to handling crashing services or accidental deletes.
Running make deploy installs the operator itself onto your current kubectl $KUBECONFIG target, which by default will do nothing until you configure a resource of type 'Seaweed' (see the examples in config/samples/).
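As a rough illustration of what such a configuration looks like, here is a minimal Seaweed resource sketch (field names are taken from the fuller sample later in this README; all values are examples only):

```yaml
# Minimal example Seaweed resource; see config/samples/ for complete variants.
apiVersion: seaweed.seaweedfs.com/v1
kind: Seaweed
metadata:
  name: seaweed-sample
spec:
  image: chrislusf/seaweedfs:latest
  master:
    replicas: 3
  volume:
    replicas: 1
    requests:
      storage: 2Gi
  filer:
    replicas: 1
```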
Goals:
- Automatically deploy and manage a SeaweedFS cluster
- Ability to be managed by other Operators
- Compatibility with seaweedfs-csi-driver
- Auto rolling upgrade and restart
- Ingress for volume server, filer and S3, to support HDFS, REST filer, S3 API and cross-cluster replication
- IAM (Identity and Access Management) service support for S3 API authentication and authorization
- Support all major cloud Kubernetes offerings: AWS, Google, Azure
- Scheduled backup to cloud storage: S3, Google Cloud Storage, Azure
- Put warm data to cloud storage tier: S3, Google Cloud Storage, Azure
- Grafana dashboard
helm repo add seaweedfs-operator https://seaweedfs.github.io/seaweedfs-operator/
helm template seaweedfs-operator seaweedfs-operator/seaweedfs-operator

Note: For versions prior to 0.1.2, the legacy repository URL https://seaweedfs.github.io/seaweedfs-operator/helm can still be used, but new releases will only be published to the main repository URL above.
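If you want to install the chart directly rather than only render it with helm template, a plain helm install against the same chart reference should work; the namespace used here is only an example:

```shell
# Install the operator chart from the repository added above (namespace name is an example).
helm install seaweedfs-operator seaweedfs-operator/seaweedfs-operator \
  --namespace seaweedfs-operator --create-namespace
```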
Add the following files to a new directory called seaweedfs-operator under your FluxCD GitRepository (publishing) directory.
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- seaweedfs-operator-namespace.yaml
- seaweedfs-operator-helmrepository.yaml
- seaweedfs-operator-helmrelease.yaml

seaweedfs-operator-namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: seaweedfs-operator

seaweedfs-operator-helmrepository.yaml

apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: seaweedfs-operator
  namespace: seaweedfs-operator
spec:
  interval: 1h
  url: https://seaweedfs.github.io/seaweedfs-operator/

seaweedfs-operator-helmrelease.yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: seaweedfs-operator
  namespace: seaweedfs-operator
spec:
  interval: 1h
  chart:
    spec:
      chart: seaweedfs-operator
      sourceRef:
        kind: HelmRepository
        name: seaweedfs-operator
        namespace: seaweedfs-operator
  values:
    webhook:
      enabled: false

NOTE: Due to an issue with the way the seaweedfs-operator-webhook-server-cert is created, .Values.webhook.enabled should be set to false initially and then to true later on. After the deployment is created, modify the seaweedfs-operator-helmrelease.yaml file to remove the values directive and everything underneath it.
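Once the webhook certificate secret exists and you have removed the values block (so the chart's default webhook setting takes effect again), you can ask Flux to re-reconcile the release; this assumes the flux CLI is installed:

```shell
# Trigger reconciliation of the edited HelmRelease (flux CLI assumed to be available).
flux reconcile helmrelease seaweedfs-operator -n seaweedfs-operator
```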
This operator uses kustomize for deployment. Please install kustomize if you do not have it.
By default, the defaulting and validation webhooks are disabled. We strongly recommend enabling them.
First clone the repository:
git clone https://github.com/seaweedfs/seaweedfs-operator --depth=1

To deploy the operator with webhooks enabled, make sure you have installed cert-manager (installation docs: https://cert-manager.io/docs/installation/) in your cluster, then follow the instructions in the config/default/kustomization.yaml file to uncomment the components you need.
Lastly, change the value of ENABLE_WEBHOOKS to "true" in config/manager/manager.yaml
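The exact layout of config/manager/manager.yaml depends on the scaffolding version, but in a typical kubebuilder-style manifest ENABLE_WEBHOOKS is an environment variable on the manager container; a sketch of the relevant fragment (surrounding fields assumed):

```yaml
# Hypothetical fragment of config/manager/manager.yaml; only the env entry matters here.
containers:
  - name: manager
    env:
      - name: ENABLE_WEBHOOKS
        value: "true"
```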
The manager image must be built locally and published to a registry accessible from your k8s cluster:
export IMG=<registry/image:tag>
# Build and push for amd64
export TARGETARCH=amd64
# Optional if you want to change TARGETOS
# export TARGETOS=linux
make docker-build
# Build and push for arm64
export TARGETARCH=arm64
make docker-build

Afterwards, install the CRDs:

make install

Then deploy the operator into your cluster using Kustomize or Helm:
# if using Kustomize
make deploy
# if using Helm
helm install seaweedfs-operator ./deploy/helm

Verify it was correctly deployed:
kubectl get pods --all-namespaces

Which may return:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-f9fd979d6-68p4c 1/1 Running 0 34m
kube-system coredns-f9fd979d6-x992t 1/1 Running 0 34m
kube-system etcd-kind-control-plane 1/1 Running 0 34m
kube-system kindnet-rp7wr 1/1 Running 0 34m
kube-system kube-apiserver-kind-control-plane 1/1 Running 0 34m
kube-system kube-controller-manager-kind-control-plane 1/1 Running 0 34m
kube-system kube-proxy-dqfg2 1/1 Running 0 34m
kube-system kube-scheduler-kind-control-plane 1/1 Running 0 34m
local-path-storage local-path-provisioner-78776bfc44-7zvxx 1/1 Running 0 34m
seaweedfs-operator-system seaweedfs-operator-controller-manager-54cc768f4c-cwz2k 2/2 Running 0 34m

See the next section for example usage - at this point you only deployed the Operator itself!
For detailed configuration options and examples, see the sample configurations in the config/samples/ directory.
The operator now supports IAM (Identity and Access Management) for S3 API authentication. IAM can be deployed in two ways:
- Standalone IAM Service: Deploy IAM as a separate service
- Embedded IAM: Run IAM embedded within filer pods
For complete IAM configuration details, examples, and deployment scenarios, see IAM_SUPPORT.md.
apiVersion: seaweed.seaweedfs.com/v1
kind: Seaweed
metadata:
  name: seaweed-sample
  namespace: default
spec:
  image: chrislusf/seaweedfs:latest
  volumeServerDiskCount: 1
  hostSuffix: seaweed.abcdefg.com
  master:
    replicas: 3
    volumeSizeLimitMB: 1024
  volume:
    replicas: 1
    requests:
      storage: 2Gi
  filer:
    replicas: 2
    s3: true    # Enable S3 API
    iam: true   # Enable embedded IAM
    config: |
      [leveldb2]
      enabled = true
      dir = "/data/filerldb2"
  # Optional: Standalone IAM service
  # iam:
  #   replicas: 1
  #   port: 8111

For more examples including standalone IAM configurations, see the config/samples/ directory:

- seaweed_v1_seaweed_with_iam_standalone.yaml
- seaweed_v1_seaweed_with_iam_embedded.yaml
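To try one of these out, apply it and watch for the custom resource and the pods the operator creates; the short 'seaweed' resource name used with kubectl get is an assumption about how the CRD is registered:

```shell
kubectl apply -f config/samples/seaweed_v1_seaweed_with_iam_embedded.yaml
kubectl get seaweed        # the Seaweed custom resource (short resource name assumed)
kubectl get pods           # master, volume, and filer pods created by the operator
```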
- TBD
Follow the instructions in https://sdk.operatorframework.io/docs/building-operators/golang/quickstart/
# install and prepare kind-cluster for development
make kind-prepare
# build the operator image and load the image into Kind cluster
make kind-load
# deploy operator and CRDs
make deploy
# install example of CR
kubectl apply -f config/samples/seaweed_v1_seaweed.yaml
# or install example with IAM support
kubectl apply -f config/samples/seaweed_v1_seaweed_with_iam_standalone.yaml

To test the IAM implementation:
# Run IAM-specific tests
go test -v -run "IAM" ./api/v1
go test -v -run "TestCreateIAM|TestBuildIAM|TestLabelsForIAM" ./internal/controller
go test -v -run "Filer.*IAM|IAM.*Filer" ./internal/controller# rebuild and re-upload image to the kind
make kind-load
# redeploy operator and CRDs
make redeploy

# register the CRD with the Kubernetes cluster
make install
# run the operator locally outside the Kubernetes cluster
make run ENABLE_WEBHOOKS=false
# From another terminal in the same directory
kubectl apply -f config/samples/seaweed_v1_seaweed.yaml
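After applying the sample, the operator running in your terminal should start logging reconcile events; a quick way to confirm it is working (again, the short 'seaweed' resource name is an assumption):

```shell
kubectl get seaweed seaweed-sample -o yaml   # inspect the custom resource spec and status
kubectl get statefulsets,services,pods       # objects the operator creates for the sample
```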