
No Pod reachable from outside the cluster (Error: 'dial tcp 10.200.57.3:9376: getsockopt: connection timed out') #12

Closed
jonashackt opened this issue Sep 3, 2018 · 8 comments


@jonashackt
Owner

I followed this great hint here

"do you have another service running in your cluster that you could try and access via kubeproxy?"

and the instructions in the docs on how to access k8s services from outside the cluster, trying to reach the k8s service hostnames as described in the Debug Services guide. I tried the following URL to access this service from outside the cluster:

https://external.k8s:6443/api/v1/namespaces/default/services/http:hostnames:80/proxy/

This just gives a 503 Service Unavailable:

Error: 'dial tcp 10.200.57.3:9376: getsockopt: connection timed out'
Trying to reach: 'http://10.200.57.3:9376/'
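For reference, the Debug Services guide's first check can be run from the master to confirm the service actually resolves to running pods - a minimal sketch (the pod IPs and node names will of course differ):

# Does the Service select any running Pods? ENDPOINTS should list pod IP:port pairs.
kubectl get endpoints hostnames

# On which worker nodes do the Pods live? Their IPs come from the Flannel range (10.200.x.x here).
kubectl get pods -l app=hostnames -o wide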
@jonashackt
Owner Author

A curl directly from the master node with curl https://localhost:8080/api/v1/namespaces/default/services/http:hostnames:80/proxy/ -v gives:

*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8080 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
@jonashackt
Owner Author

jonashackt commented Sep 3, 2018

Oh, my bad - there's no :port_name defined for the hostnames service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hostnames
spec:
  selector:
    matchLabels:
      app: hostnames
  replicas: 3
  template:
    metadata:
      labels:
        app: hostnames
    spec:
      containers:
      - name: hostnames
        image: k8s.gcr.io/serve_hostname
        ports:
        - containerPort: 9376
          protocol: TCP

so we can leave that out of the URL. Also, we can leave out https. The correct curl should be: curl --cacert vagrant/certificates/ca.pem http://localhost:8080/api/v1/namespaces/default/services/hostnames/proxy/ -v. Now the error is the same as the one already reported above.
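For context, the hostnames service here was created the way the Debug Services guide suggests, i.e. without a named port - a sketch (if it was created differently in this repo, the :port_name part of the proxy URL would be needed again):

# Expose the hostnames Deployment as a plain ClusterIP Service, port 80 -> targetPort 9376,
# with no named port - hence .../services/hostnames/proxy/ instead of .../services/http:hostnames:80/proxy/
kubectl expose deployment hostnames --port=80 --target-port=9376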

@jonashackt
Owner Author

Is our brave kubernetes-the-hard-way-inspired cluster's apiserver able to connect to the services & pods of our cluster at all (see kubernetes/dashboard#2340 (comment))??! 😭

@jonashackt
Owner Author

jonashackt commented Sep 5, 2018

Seems that using the correct k8s cluster name for the kubectl configuration has somehow fixed the problem: cbe34d0

At least using the worker nodes' DNS names directly now gives access to the deployed apps. Accessing nginx or hello-world, for example, only requires the correct NodePort (as described here: https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/). As a NodePort is exposed on every k8s node, you can access services regardless of which worker node you hit.
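As a quick sketch of that NodePort access (the nginx Deployment name and the worker node DNS name are just placeholders here):

# Expose a Deployment via a NodePort Service - Kubernetes assigns a port from 30000-32767
kubectl expose deployment nginx --type=NodePort --port=80

# Look up the assigned NodePort
kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}'

# Every node answers on that port, regardless of where the Pods actually run (placeholder DNS name)
curl http://worker-1.k8s:<nodePort>/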

This means we are now able to use the first access method, "Access services through public IPs", from https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#accessing-services-running-on-the-cluster.

What remains open is to use the second method, "Access services, nodes, or pods using the Proxy Verb", which is the more recommended one, since the API server can do the authentication and authorization. The issue therefore remains open - it seems that the API server couldn't connect to the kube-proxy - maybe because the API server has no access to the Flannel/kube-dns/Docker network (as described here: kubernetes/dashboard#2340 (comment))??!
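One way to check that hypothesis directly - a sketch, run on the master node, using the pod IP from the error above:

# The apiserver dials the pod IP directly for the proxy verb. Without Flannel on the
# master there is no route into the 10.200.0.0/16 pod network, so this should hang
# and time out just like the apiserver's request does:
curl --max-time 5 http://10.200.57.3:9376/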

@jonashackt
Owner Author

Created kelseyhightower/kubernetes-the-hard-way#389, as this seems to be a general design problem of the Kubernetes cluster.

@jonashackt
Owner Author

Hmm, it seems that Flannel needs to be present on the master nodes, too: https://stackoverflow.com/a/39179200/4964553
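A small sketch to verify this once flanneld also runs on a master node (device names depend on the Flannel backend; 10.200.0.0/16 is the pod range used here):

# Routes into the pod network should now show up on the master,
# typically via a flannel.1 (VXLAN) or cni0 device:
ip route | grep 10.200

# ...and a pod IP that timed out before should now answer:
curl --max-time 5 http://10.200.57.3:9376/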

jonashackt added a commit that referenced this issue Sep 26, 2018
…ter nodes, otherwise the kube-apiserver can't access the services and pods inside the Flannel network on the worker nodes.
@jonashackt
Owner Author

Now, having Flannel also running on the master nodes, I'm able to access an application inside the Kubernetes cluster after executing the kubectl proxy command and accessing URLs like http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default in the browser!
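For completeness, the same proxy-verb access sketched on the command line (the dashboard URL above assumes the kubernetes-dashboard service is deployed in kube-system):

# Start a local proxy to the apiserver (listens on 127.0.0.1:8001 by default)
kubectl proxy

# In another terminal (or the browser), services are reachable via the proxy verb:
curl http://localhost:8001/api/v1/namespaces/default/services/hostnames/proxy/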
