
I've created a deployment template using Helm (v3.14.3) with support for setting initContainers. Recently I noticed that an initContainer I removed from values.yaml is still present in the cluster. I've tried various fixes, but I can't get Helm to remove it.

The way I deploy the chart is:

helm template site-wordpress ./web-chart \
  -f ./values-prod.yaml \
  --set image.tag=prod-61bdfc674d25c376f753849555ab74ce0b01a0dea617a185f8a7a5e33689445e
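(Note: helm template only renders the manifests; nothing is applied by that command alone. The line below sketches one way the output could reach the cluster; the pipe to kubectl apply is an assumption and may not match the actual workflow.)

# Assumed apply step, not shown above
helm template site-wordpress ./web-chart \
  -f ./values-prod.yaml \
  --set image.tag=prod-61bdfc674d25c376f753849555ab74ce0b01a0dea617a185f8a7a5e33689445e \
  | kubectl apply -f -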

Can someone advise me on the issue? Here's the template:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "app.fullname" . }}
  labels:
    {{- include "app.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  {{- with .Values.strategy }}
  strategy:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "app.selectorLabels" . | nindent 6 }}
      app.kubernetes.io/component: app
  template:
    metadata:
      annotations:
        # This will change whenever initContainers config changes (including when removed)
        checksum/initcontainers: {{ .Values.initContainers | default list | toYaml | sha256sum }}
        {{- with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      labels:
        {{- include "app.labels" . | nindent 8 }}
        {{- with .Values.podLabels }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
        app.kubernetes.io/component: app
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "app.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      {{- if .Values.initContainers }}
      initContainers:
        {{- range .Values.initContainers }}
        - name: {{ .name }}
          image: "{{ $.Values.image.repository }}:{{ $.Values.image.tag | default $.Chart.AppVersion }}"
          imagePullPolicy: {{ $.Values.image.pullPolicy }}
          {{- if .command }}
          command: {{ toJson .command }}
          {{- end }}
          {{- if .args }}
          args: {{ toJson .args }}
          {{- end }}
        {{- end }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          {{- if .Values.command }}
          command: {{ toJson .Values.command }}
          {{- end }}
          ports:
            - name: http
              containerPort: {{ .Values.service.port }}
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
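For reference, the range over .Values.initContainers above expects a list shaped like the fragment below. The names and commands are placeholders, not the real values-prod.yaml content; the image is taken from .Values.image.* by the template, so it is not set per init container.

# Hypothetical values fragment consumed by the initContainers loop
initContainers:
  - name: db-migrations
    command: ["/bin/sh", "-c"]
    args: ["./scripts/run-migrations.sh"]
  - name: pull-config
    command: ["/bin/sh", "-c"]
    args: ["cp -r /defaults/. /shared-config/"]

Removing the key entirely (or setting initContainers: []) drops the whole initContainers: block from the rendered Deployment and changes the checksum annotation, so the pod template differs from the previous one.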
  • Set the property to an empty array. Commented Oct 14 at 21:05
  • The setup in the chart doesn't quite make sense to me: you're basically asking the administrator to supply the Kubernetes YAML initContainers: fragment, except with many of the options missing. Can your chart take a stronger opinion on what images it might want to run (if any) as initContainers:? Commented Oct 14 at 21:07
  • When the array is empty, both previously created initContainers remain in the cluster. Commented Oct 15 at 6:19
  • @DavidMaze I cut a few lines to make it clearer. initContainers are used to run scripts the main container needs in order to start, like DB migrations, initial configs, pulling some configuration, etc. Commented Oct 15 at 6:21
  • @Daniel, it would be better if you preconfigured the initContainers with your own custom Docker image: either a single initContainer that takes arguments your chart users can modify in the values.yaml, or multiple initContainers with specific purposes that your chart users can enable or disable in the values.yaml (sketched below). Allowing free-for-all initContainers that can run any Docker image defeats the purpose of creating a chart. Commented Oct 19 at 13:28
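As an illustration of that last suggestion, the chart could own the init steps and expose only toggles and arguments in values.yaml. The initJobs layout and the step names below are invented for the sketch, not part of the chart in the question:

# Hypothetical opinionated layout: chart-defined init steps, user-controlled flags
initJobs:
  dbMigrations:
    enabled: true
    args: ["migrate", "--force"]
  configSync:
    enabled: false

Each init container in the template would then sit behind its own enabled flag (e.g. {{- if .Values.initJobs.dbMigrations.enabled }}), with the image and command pinned by the chart rather than supplied by the user.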

1 Answer


As others commented, the way this chart works is very generic, and allowing arbitrary initContainers from the values might not be the best idea.

That being said, what you are seeing might be due to the new Deployment not rolling out successfully, so the Pod is never actually replaced, which would explain why you still see the initContainer. Can you confirm that when you deploy the Helm chart without any init container it does replace the latest Deployment and Pod? Check the Kubernetes events to verify there isn't any error, and also describe the Deployment:

kubectl get events --sort-by=.metadata.creationTimestamp

Replace <deployment_name> below:

kubectl describe deployment <deployment_name>

Also confirm that the initContainer you see is not coming from some other configuration (for example, tools such as Istio inject initContainers into every Pod).
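A quick way to check where it comes from is to list the init container names and images directly on the running Pod (replace <pod_name>; the jsonpath formatting is just one option):

# Show init container names and images on the live Pod
kubectl get pod <pod_name> -o jsonpath='{range .spec.initContainers[*]}{.name}{"\t"}{.image}{"\n"}{end}'

An injected init container such as Istio's istio-init would show up here even though it is not rendered by the chart.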

If this doesn't work, please share the values.yaml and the output of the kubectl describe deployment command.
