Trouble mounting an EBS to a Pod in a Kubernetes cluster

The cluster I use is bootstrapped with kubeadm and deployed on AWS.



sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:51:33Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}


I am trying to configure a pod to mount a persistent volume (I'm not dealing with PV and PVC objects for the moment). This is the manifest I used:



apiVersion: v1
kind: Pod
metadata:
  name: mongodb-aws
spec:
  volumes:
  - name: mongodb-data
    awsElasticBlockStore:
      volumeID: vol-xxxxxx
      fsType: ext4
  containers:
  - image: mongo
    name: mongodb
    volumeMounts:
    - name: mongodb-data
      mountPath: /data/db
    ports:
    - containerPort: 27017
      protocol: TCP
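
For reference, the pod can be applied and inspected like this (a sketch; the filename mongodb-aws.yaml is an assumption), and kubectl describe / kubectl logs is where the error quoted below shows up:

kubectl apply -f mongodb-aws.yaml
kubectl describe pod mongodb-aws    # the Events section shows volume attach/mount problems
kubectl logs mongodb-aws            # container logs, once the volume is mounted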


At first I had this error from the logs of the pod:



mount: special device /var/lib/kubelet/plugins/kubernetes.io/aws-ebs/mounts/vol-xxxx does not exist
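
One way to check whether the EBS volume was actually attached to the node before the kubelet tried to mount it (a hedged sketch; it assumes the AWS CLI is configured, and vol-xxxxxx stands for the real volume ID):

# on the node: an attached EBS volume shows up as an extra block device (e.g. /dev/xvdf)
lsblk

# from any machine with AWS credentials: inspect the volume's attachment state
aws ec2 describe-volumes --volume-ids vol-xxxxxx --query 'Volumes[0].Attachments'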


After some research, I discovered that I have to configure a cloud provider, and that is what I've been trying to do for the past 10 hours. I tested many suggestions, but none worked. I tried tagging all the resources used by the cluster, as mentioned in https://github.com/kubernetes/kubernetes/issues/53538#issuecomment-345942305, and I also tried the official way to enable in-tree cloud providers with kubeadm: https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/



kubeadm_config.yml file:



apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1alpha3
kubernetesVersion: v1.12.0
apiServerExtraArgs:
  cloud-provider: "aws"
  cloud-config: "/etc/kubernetes/cloud.conf"
apiServerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"
controllerManagerExtraArgs:
  cloud-provider: "aws"
  cloud-config: "/etc/kubernetes/cloud.conf"
controllerManagerExtraVolumes:
- name: cloud
  hostPath: "/etc/kubernetes/cloud.conf"
  mountPath: "/etc/kubernetes/cloud.conf"


In /etc/kubernetes/cloud.conf I put:



[Global] 
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes
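
As mentioned in the comments below, the controller manager later complains that no zone is specified; adding a Zone key to the same [Global] section clears that message. A minimal sketch of the resulting file (eu-west-1b is the zone quoted in the comments):

[Global]
Zone=eu-west-1b
KubernetesClusterTag=kubernetes
KubernetesClusterID=kubernetes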


After running kubeadm init --config kubeadm_config.yml I had these errors:



[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster


The control plane was not created.


When I removed:



apiVersion: kubeadm.k8s.io/v1alpha3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "aws"
    cloud-config: "/etc/kubernetes/cloud.conf"


from kubeadm_config.yml and ran kubeadm init --config kubeadm_config.yml again, the Kubernetes master initialized successfully, but when I executed kubectl get pods --all-namespaces, I got:



NAMESPACE     NAME                                        READY   STATUS             RESTARTS   AGE
kube-system   etcd-ip-172-31-31-160                       1/1     Running            0          11m
kube-system   kube-apiserver-ip-172-31-31-160             1/1     Running            0          11m
kube-system   kube-controller-manager-ip-172-31-31-160    0/1     CrashLoopBackOff   6          11m
kube-system   kube-scheduler-ip-172-31-31-160             1/1     Running            0          10m


The controller manager didn't run. However, the --cloud-provider=aws command-line flag is present for the apiserver (in /etc/kubernetes/manifests/kube-apiserver.yaml) and for the controller manager (in /etc/kubernetes/manifests/kube-controller-manager.yaml).
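
A quick way to double-check that the flags really ended up in the static pod manifests (a sketch):

sudo grep -RE "cloud-provider|cloud-config" /etc/kubernetes/manifests/
# both kube-apiserver.yaml and kube-controller-manager.yaml should list
#   - --cloud-provider=aws
#   - --cloud-config=/etc/kubernetes/cloud.conf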



When I ran sudo kubectl logs kube-controller-manager-ip-172-31-13-85 -n kube-system, I got:



Flag --address has been deprecated, see --bind-address instead.
I1126 11:27:35.006433 1 serving.go:293] Generated self-signed cert (/var/run/kubernetes/kube-controller-manager.crt, /var/run/kubernetes/kube-controller-manager.key)
I1126 11:27:35.811493 1 controllermanager.go:143] Version: v1.12.0
I1126 11:27:35.812091 1 secure_serving.go:116] Serving securely on [::]:10257
I1126 11:27:35.812605 1 deprecated_insecure_serving.go:50] Serving insecurely on 127.0.0.1:10252
I1126 11:27:35.812760 1 leaderelection.go:187] attempting to acquire leader lease kube-system/kube-controller-manager...
I1126 11:27:53.260484 1 leaderelection.go:196] successfully acquired lease kube-system/kube-controller-manager
I1126 11:27:53.261474 1 event.go:221] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"b0da1291-f16d-11e8-baeb-02a38a37cfd6", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ip-172-31-13-85_4603714e-f16e-11e8-8d9d-02a38a37cfd6 became leader
I1126 11:27:53.290493 1 aws.go:1042] Building AWS cloudprovider
I1126 11:27:53.290642 1 aws.go:1004] Zone not specified in configuration file; querying AWS metadata service
F1126 11:27:53.296760 1 controllermanager.go:192] error building controller context: cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-0b063e2a3c9797398: "error listing AWS instances: "NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors""



  • I didn’t try to downgrade kubeadm (to be able to use manifests with only kind: MasterConfiguration)

If you need more information, please feel free to ask.

  • What is in the log for the controller manager pod that is in CrashLoopBackOff?

    – Matthew L Daniel
    Nov 25 '18 at 22:16

  • I added it to the post. I fixed "Zone not specified in configuration" by adding Zone=eu-west-1b to the file /etc/kubernetes/cloud.conf. Regarding the credentials, I don't need dynamic provisioning, so I suppose I'm not forced to give the master the right to manage AWS resources (I cannot add them in [Global] anyway).

    – Pylint424
    Nov 26 '18 at 12:51

  • Well, then if you're not going to give the master the right to manage AWS resources, omit --cloud-provider entirely since that's everything that --cloud-provider would want to do: attach, detach (and optionally create) EBS volumes, update load balancers, perhaps subnets, discover instances, that kind of thing.

    – Matthew L Daniel
    Nov 27 '18 at 6:34

  • Thank you for the explanation. I didn't know that the master needs permission to mount/unmount EBS volumes; I thought it needed it only to create them. How can I give it the permissions (is assigning an IAM role to the instance where it runs sufficient)? And how can I configure the kubelet?

    – Pylint424
    Nov 27 '18 at 12:31

  • Yes, the instance role is the normal way that is handled, and here is an example policy. I don't have the time to dig up the documentation links for the cloud-provider options right now, but studying the kubespray repo is a great way of seeing every single moving part without having to learn golang.

    – Matthew L Daniel
    Nov 27 '18 at 17:20
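
The example policy linked in the last comment is not reproduced here; as a rough, hedged sketch (an assumption, not the linked document), an instance role that lets the in-tree AWS provider discover instances and attach/detach existing EBS volumes needs something along these lines:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeVolumes",
        "ec2:DescribeAvailabilityZones",
        "ec2:AttachVolume",
        "ec2:DetachVolume"
      ],
      "Resource": "*"
    }
  ]
}

Attach it to an IAM role and assign that role to the EC2 instances through an instance profile; dynamic provisioning, load balancers, etc. would need additional permissions.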