I'm using a ConfigMap to switch some functionality of the application in the pod on and off. I have mounted it in the deployment like this:

volumes:
  - name: {{ .Chart.Name }}-config-volume
    projected:
      sources:
      - configMap:
          name: {{ .Chart.Name }}-content-config
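
The container side mounts that volume in the usual way; the mountPath here is just an example, the exact path doesn't matter for the question:

volumeMounts:
  - name: {{ .Chart.Name }}-config-volume
    mountPath: /etc/config
    readOnly: true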

Then I have some configuration data in the ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}-content-config
data:
  content.properties: |
    {
      "Enabled": false,
      "ApiEndpoint": "..."
    }

When the functionality is configured and ready to be enabled, I run kubectl edit cm and set "Enabled" to true. The application reads the file every 2 minutes and refreshes its configuration accordingly without restarting the pod. OK, it works, and it persists through pod restarts.
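
Concretely that's something like the following, with my-app standing in for whatever {{ .Chart.Name }} renders to:

kubectl edit configmap my-app-content-config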

But if I do a helm upgrade to the next version, everything is reset back to the default values, e.g. "Enabled": false. Is there any way to make the ConfigMap persist across upgrades?

1 Answer

Don't try to use two separate tools to manage your Kubernetes manifests. You should be able to manage this entirely in Helm.

For example, you can put the API endpoint and the feature flag in deploy-time configuration:

# values.yaml
contentEnabled: false
apiEndpoint: https://...

Then, when your Helm chart produces the ConfigMap, it can insert the values from your Helm-level configuration. Helm includes a toJson template function that can encode an arbitrary value as JSON.

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Chart.Name }}-content-config
data:
  content.properties: |
    {
      "Enabled": {{ toJson .Values.contentEnabled }},
      "ApiEndpoint": {{ toJson .Values.apiEndpoint }}
    }
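
With the values above, the data section of the rendered ConfigMap would come out roughly as:

data:
  content.properties: |
    {
      "Enabled": false,
      "ApiEndpoint": "https://..."
    }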

Then you can keep a reference set of override values (probably in source control, maybe managed in your CD system). If you need to change these values then you can use helm upgrade, and it will consistently redeploy everything from the rendered templates.

# deploy/values-dev.yaml
apiEndpoint: https://internal.example.com/api/

helm upgrade --install -f deploy/values-dev.yaml -n dev my-app .
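
When you are ready to turn the feature on, change the flag in that same values file and run the same command again; because the setting now lives in the values file, it survives every future upgrade:

# deploy/values-dev.yaml
contentEnabled: true
apiEndpoint: https://internal.example.com/api/

helm upgrade --install -f deploy/values-dev.yaml -n dev my-app .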

Once Helm has deployed it, don't try to kubectl edit any of the resources (except maybe in very-short-term debugging scenarios, but if you do, make sure you put things back the way you found them).

(Some versions of Helm have included a "3-way merge" that attempts to do what you describe. IME that has been more a source of confusion than anything helpful: if a deploy fails then Helm tries to do a merge between the previous version, the failed deploy, and the corrected version, and you inevitably wind up with something that's plainly right there in your template file not showing up in the cluster. A previous deploy pipeline of mine went out of its way to explicitly uninstall the previous version specifically to get around the problems that the 3-way merge introduced.)

3 Comments

The issue is that it'll require redeploying the pod, which is a problem for now: it has some downtime and some side effects, e.g. logs will be interrupted, etc.
If the text of your Deployment manifest doesn't change, then the Pods won't be deleted and recreated. You could use the helm-diff plugin to see what a change in values would apply to the cluster.
Hm, thanks, it works. I thought any change in the .yaml would cause a restart.
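
For reference, the helm-diff plugin mentioned in the comments can preview what an upgrade would change, along the lines of (same release, namespace, and values file as in the example above):

helm plugin install https://github.com/databus23/helm-diff
helm diff upgrade -f deploy/values-dev.yaml -n dev my-app .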
