
kubenet/kubelet leaks IPs on docker restart #34278

@bprashanth

Description


Restart docker and you'll see the number of allocated IPs jump:

# ls /var/lib/cni/networks/kubenet/ | wc -l
51

# sudo systemctl restart docker.service

# ls /var/lib/cni/networks/kubenet/ | wc -l
69
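The before/after check above can be scripted. This is a sketch only: it assumes kubenet's host-local IPAM state lives under /var/lib/cni/networks/kubenet/ with one file per allocated IP, and `count_allocs` is an illustrative helper name, not an existing tool:

```shell
# Count allocation files in a host-local IPAM state directory.
# count_allocs is a hypothetical helper, not part of kubelet or CNI.
count_allocs() {
    # one file per allocated IP sits directly in the directory
    find "$1" -maxdepth 1 -type f | wc -l | tr -d ' '
}

# On a node (needs root; illustrative only):
#   count_allocs /var/lib/cni/networks/kubenet
#   sudo systemctl restart docker.service
#   sleep 30    # give kubelet time to recreate pod sandboxes
#   count_allocs /var/lib/cni/networks/kubenet   # the count only grows
```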

Do it enough times (or, with enough pods, just once) and the node becomes effectively unschedulable, yet still reports Ready:

# ls /var/lib/cni/networks/kubenet/ | wc -l
254

# docker ps
CONTAINER ID        IMAGE                                                                  COMMAND                  CREATED             STATUS              PORTS               NAMES
1e60c680bb0f        gcr.io/google_containers/kube-proxy:fa5094377f97111eda3c700c05b313b2   "/bin/sh -c 'kube-pro"   2 minutes ago       Up 2 minutes                            k8s_kube-proxy.84f0299f_kube-proxy-e2e-test-beeps-minion-group-1uut_kube-system_90fd8ba4ce791b234b5cbbba324659b6_a32991a1
68d21830a8d4        gcr.io/google_containers/pause-amd64:3.0                               "/pause"                 2 minutes ago       Up 2 minutes                            k8s_POD.d8dbe16c_kube-proxy-e2e-test-beeps-minion-group-1uut_kube-system_90fd8ba4ce791b234b5cbbba324659b6_a8345dfb
47c4073a177f        bprashanth/kube-dnsmasq-amd64:0.1                                      "/usr/sbin/dnsmasq --"   2 minutes ago       Up 2 minutes                            k8s_dnsmasq.b185c2d1_dnsmasq-e2e-test-beeps-minion-group-1uut_kube-system_66b11a42154e4cd1f207bf98be04a70a_f2725cc2
d92c0f6f80b3        gcr.io/google_containers/pause-amd64:3.0                               "/pause"                 2 minutes ago       Up 2 minutes                            k8s_POD.d8dbe16c_dnsmasq-e2e-test-beeps-minion-group-1uut_kube-system_66b11a42154e4cd1f207bf98be04a70a_858b398a

Scheduling a pod on it shows:

Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From                        SubObjectPath   Type        Reason      Message
  --------- --------    -----   ----                        -------------   --------    ------      -------
  5s        1s      4   {kubelet e2e-test-beeps-minion-group-1uut}          Warning     FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "echoheaders-3ty5d_default" with SetupNetworkError: "Failed to setup network for pod \"echoheaders-3ty5d_default(997a82ce-8c15-11e6-bbb0-42010af00002)\" using network plugins \"kubenet\": Error adding container to network: no IP addresses available in network: kubenet; Skipping pod"

I'm guessing this is because we allocate new IPs for the recreated sandboxes but never release the old, unused ones.
@kubernetes/sig-network @kubernetes/sig-node

Maybe we can fix this by invoking kubenet every time kubelet notices a container restart?
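Failing that, a garbage-collection pass over the IPAM state directory could reclaim the leaked allocations. The sketch below assumes the host-local IPAM layout (one file per allocated IP, whose contents are the owning container's ID, plus a `last_reserved_ip` bookkeeping file); `gc_stale_ips` and the exact file layout are assumptions for illustration, not an existing API:

```shell
# gc_stale_ips <alloc_dir> <live_ids_file>
#   alloc_dir:     directory of per-IP allocation files (e.g. the kubenet dir)
#   live_ids_file: file with one running-container ID per line
# Hypothetical cleanup helper: removes any allocation file whose recorded
# owner is not among the currently running containers.
gc_stale_ips() {
    alloc_dir="$1"
    live_ids_file="$2"
    for f in "$alloc_dir"/*; do
        [ -f "$f" ] || continue
        # skip bookkeeping files the plugin may keep alongside allocations
        case "$(basename "$f")" in
            last_reserved_ip*) continue ;;
        esac
        owner=$(head -n1 "$f" | tr -d '[:space:]')
        if ! grep -qx "$owner" "$live_ids_file"; then
            rm -f "$f"   # leaked: owning container no longer exists
        fi
    done
}
```

In practice the live-ID list would come from something like `docker ps -q --no-trunc`, and the pass would have to run only while no CNI ADD is in flight to avoid racing a fresh allocation.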

Metadata

Labels

kind/bug - Categorizes issue or PR as related to a bug.
priority/critical-urgent - Highest priority. Must be actively worked on as someone's top priority right now.
sig/network - Categorizes an issue or PR as relevant to SIG Network.
