Kubernetes service unreachable from master node on EC2












I created a k8s cluster on AWS using kubeadm, with 1 master and 1 worker, following the guide available here.



Then I started an Elasticsearch container:



kubectl run elastic --image=elasticsearch:2 --replicas=1


It was deployed successfully on the worker node. Then I tried to expose it as a service on the cluster:



kubectl expose deploy/elastic --port 9200


And it was exposed successfully:



NAMESPACE     NAME                                                      READY     STATUS    RESTARTS   AGE
default       elastic-664569cb68-flrrz                                  1/1       Running   0          16m
kube-system   etcd-ip-172-31-140-179.ec2.internal                       1/1       Running   0          16m
kube-system   kube-apiserver-ip-172-31-140-179.ec2.internal             1/1       Running   0          16m
kube-system   kube-controller-manager-ip-172-31-140-179.ec2.internal    1/1       Running   0          16m
kube-system   kube-dns-86f4d74b45-mc24s                                 3/3       Running   0          17m
kube-system   kube-flannel-ds-fjkkc                                     1/1       Running   0          16m
kube-system   kube-flannel-ds-zw4pq                                     1/1       Running   0          17m
kube-system   kube-proxy-4c8lh                                          1/1       Running   0          17m
kube-system   kube-proxy-zkfwn                                          1/1       Running   0          16m
kube-system   kube-scheduler-ip-172-31-140-179.ec2.internal             1/1       Running   0          16m

NAMESPACE     NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       elastic      ClusterIP   10.96.141.188   <none>        9200/TCP        16m
default       kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP         17m
kube-system   kube-dns     ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   17m

NAMESPACE     NAME              DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                   AGE
kube-system   kube-flannel-ds   2         2         2       2            2           beta.kubernetes.io/arch=amd64   17m
kube-system   kube-proxy        2         2         2       2            2           <none>                          17m

NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
default       elastic    1         1         1            1           16m
kube-system   kube-dns   1         1         1            1           17m

NAMESPACE     NAME                  DESIRED   CURRENT   READY   AGE
default       elastic-664569cb68    1         1         1       16m
kube-system   kube-dns-86f4d74b45   1         1         1       17m


But when I try to curl http://10.96.141.188:9200 from the master node, I get a timeout, and everything indicates that the generated cluster IP is not reachable from the master node. It works only from the worker node.
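
For reference, a minimal way to narrow down whether the Service layer or the overlay network is at fault (the pod IP below is a placeholder; on this setup it should be a 10.244.x.x Flannel address):

kubectl get endpoints elastic      # the Service should list the pod as an endpoint
kubectl get pods -o wide           # note the pod IP on the Flannel network
curl http://<pod-ip>:9200          # run from the master: a timeout here points at the overlay network, not the Service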



I tried everything I could find:



Add a bunch of rules to iptables



iptables -P FORWARD ACCEPT
iptables -I FORWARD 1 -i cni0 -j ACCEPT -m comment --comment "flannel subnet"
iptables -I FORWARD 1 -o cni0 -j ACCEPT -m comment --comment "flannel subnet"
iptables -t nat -A POSTROUTING -s 10.244.0.0/16 ! -d 10.244.0.0/16 -j MASQUERADE



  • Disabled firewalld

  • Opened all ports in the EC2 security group (from everywhere)

  • Tried different Docker versions (1.13.1, 17.03, 17.06, 17.12)

  • Tried different k8s versions (1.9.0 to 1.9.6)

  • Tried different CNIs (Flannel and Weave)

  • Added some parameters to the kubeadm init command (--node-name with the FQDN and --apiserver-advertise-address with the master's public IP)
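
For reference, roughly what that last kubeadm init invocation could look like (the node name is taken from the output above; the public IP is a placeholder, and the pod CIDR is assumed from the 10.244.0.0/16 Flannel network used in the iptables rules above):

kubeadm init \
  --node-name ip-172-31-140-179.ec2.internal \
  --apiserver-advertise-address <master-public-ip> \
  --pod-network-cidr 10.244.0.0/16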


But none of this worked. It appears to be an AWS-specific issue, since the tutorial works fine on a Linux Academy Cloud Server.



Is there anything else I could try?



Note: currently I'm using Docker 1.13 and k8s 1.9.6 (with Flannel 0.9.1) on CentOS 7.

amazon-ec2 kubernetes kubeadm

asked Apr 3 '18 at 14:06 by DanielSP

2 Answers






I finally found the problem. According to this page, Flannel needs UDP ports 8285 and 8472 open on both the master and the worker node. Interestingly, this is not mentioned in the official kubeadm documentation.
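
For reference, a sketch of what opening those ports with the AWS CLI could look like (the security group ID is a placeholder, and the 172.31.0.0/16 source range is only an assumption based on the node addresses above):

aws ec2 authorize-security-group-ingress --group-id <node-sg-id> --protocol udp --port 8285 --cidr 172.31.0.0/16   # flannel udp backend
aws ec2 authorize-security-group-ingress --group-id <node-sg-id> --protocol udp --port 8472 --cidr 172.31.0.0/16   # flannel vxlan backend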






answered Apr 4 '18 at 19:14 by DanielSP

kubectl run elastic --image=elasticsearch:2 --replicas=1




As best I can tell, you did not inform Kubernetes that the elasticsearch:2 image listens on any port(s), which it will not infer by itself. You would have experienced the same problem if you had just run that image under docker without similarly specifying the --publish or --publish-all options.



Thus, when the ClusterIP attempts to forward traffic from port 9200 to the Pods matching its selector, those packets fall into /dev/null because the container is not listening for them.
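
As a sketch of what declaring that port could look like with the same commands used in the question (--port on kubectl run sets the containerPort of the generated Deployment, and --target-port on kubectl expose makes the Service's backend port explicit; the values simply mirror the question):

kubectl run elastic --image=elasticsearch:2 --replicas=1 --port=9200
kubectl expose deploy/elastic --port 9200 --target-port 9200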




Add a bunch of rules to iptables




Definitely don't do that. If you look, there are already a ton of iptables rules managed by kube-proxy; in fact, its primary job is to own the iptables rules on the Node it runs on. Your rules only serve to confuse both kube-proxy and any person who comes along behind you trying to work out where those random rules came from. If you haven't already made them permanent, either undo them or just reboot the machine to flush those tables. Leaving your ad-hoc rules in place will 100% not make your troubleshooting process any easier.






answered Apr 4 '18 at 6:33 by Matthew L Daniel

• The port is assigned in the second step (service creation). It is working; I can curl 10.96.141.188:9200 successfully from the worker node. But you are absolutely right about the iptables!

– DanielSP, Apr 4 '18 at 19:01













• "The port is assigned on second step (service creation)": yes, I am aware the Service port is assigned, but the Service port is the contract of the ClusterIP, and the container port is the contract between the Pod and its containers. They can, and in my cluster absolutely do, differ, because they represent differing levels of long-term promise to other members of the cluster.

– Matthew L Daniel, Apr 5 '18 at 3:41
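
To illustrate the distinction: the Service port and the container port are independent declarations and may differ, for example (numbers purely illustrative):

kubectl expose deploy/elastic --port 80 --target-port 9200
# clients reach the Service on port 80 (e.g. curl http://<cluster-ip>:80),
# while the container inside the Pod still listens on 9200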










