
Kubernetes Dashboard isn't reachable. #10

Closed · jonashackt opened this issue Aug 31, 2018 · 4 comments

jonashackt (Owner) commented Aug 31, 2018

After deployment and running kubectl proxy, accessing the Dashboard times out:

Error: 'dial tcp 10.200.27.2:8443: getsockopt: connection timed out'
Trying to reach: 'http://10.200.27.2:8443/'
jonashackt (Owner Author) commented Aug 31, 2018

Looks like kubernetes/dashboard#2855 (comment):

This kind of errors always come from API server, not the application you are trying to reach. Check your cluster config and see if you can access other applications through service proxy first.
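Following that advice, the service proxy can be checked against another application first. A minimal sketch, assuming kubectl proxy is used and some plain-HTTP service is already running in the cluster (`<namespace>` and `<service>` are placeholders, not names from this repo):

```shell
# Start the API server proxy locally (default port 8001).
kubectl proxy &

# Request any known-good plain-HTTP service through the API server's
# service proxy. If this also times out, the problem is in the cluster
# networking / kube-proxy, not in the Dashboard itself.
curl http://localhost:8001/api/v1/namespaces/<namespace>/services/<service>/proxy/
```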

jonashackt (Owner Author) commented Sep 3, 2018

Only a full debug session using curls as described in https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-kube-proxy-proxying shed some light on the resolution problems, where kube-proxy didn't always proxy correctly. The curls went to the hostnames k8s service (deployed earlier, also as described in the docs), which had the IP 10.32.0.130:

NAMESPACE     NAME                           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
default       service/hostnames              ClusterIP   10.32.0.130   <none>        80/TCP          3d
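For reference, the hostnames service from that debugging guide is set up roughly like this (image and ports follow the guide of that era; the `--replicas` flag on `kubectl run` was valid for kubectl versions current in 2018 and was removed later):

```shell
# Three pods serving their own hostname on port 9376 (per the
# debug-services guide), exposed as a ClusterIP service on port 80.
kubectl run hostnames --image=k8s.gcr.io/serve_hostname --replicas=3
kubectl expose deployment hostnames --port=80 --target-port=9376

# Confirm the ClusterIP that the curls below target.
kubectl get services hostnames
```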

The curls only worked intermittently. After a full rebuild of the cluster, they finally behaved as described in the docs:

vagrant@worker-0:~$ curl 10.32.0.130
hostnames-64fbcd9c87-zvsnb
vagrant@worker-0:~$ curl 10.32.0.130
hostnames-64fbcd9c87-6s5gs
vagrant@worker-0:~$ curl 10.32.0.130
hostnames-64fbcd9c87-hx5tk
vagrant@worker-0:~$ curl 10.32.0.130
hostnames-64fbcd9c87-zvsnb
vagrant@worker-0:~$ curl 10.32.0.130
hostnames-64fbcd9c87-6s5gs
vagrant@worker-0:~$ curl 10.32.0.130
hostnames-64fbcd9c87-hx5tk
vagrant@worker-0:~$ curl 10.32.0.130
hostnames-64fbcd9c87-hx5tk
vagrant@worker-0:~$ curl 10.32.0.130
hostnames-64fbcd9c87-6s5gs

Those are all three pods, as `kubectl get pods -l app=hostnames` shows:

NAME                         READY     STATUS    RESTARTS   AGE
hostnames-64fbcd9c87-6s5gs   1/1       Running   0          3d
hostnames-64fbcd9c87-hx5tk   1/1       Running   0          3d
hostnames-64fbcd9c87-zvsnb   1/1       Running   0          3d
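The manual curls above can be wrapped in a loop that counts how often each replica answers, making it obvious whether the ClusterIP really balances across all three pods. A sketch using the IP from this cluster, run from a worker node as in the transcript:

```shell
# 20 requests against the hostnames ClusterIP; serve_hostname prints the
# pod name, so sort | uniq -c shows the per-pod distribution. All three
# replica names should appear with roughly similar counts.
for i in $(seq 1 20); do curl -s 10.32.0.130; done | sort | uniq -c
```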

All hostnames pods finally answered correctly! Not quite sure if that's also related to the earlier DNS problems, e.g. 3091bbe

Now this working kube-proxy setup finally brought up new errors like this (from the Dashboard log, via `kubectl logs kubernetes-dashboard-7b9c7bc8c9-pvgt9 -n kube-system --follow`):

2018/08/31 09:21:29 http: TLS handshake error from 10.200.57.0:44806: tls: first record does not look like a TLS handshake
2018/08/31 09:21:38 http: TLS handshake error from 10.200.57.0:44808: remote error: tls: unknown certificate authority
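A hedged reading of those two log lines: the Dashboard serves HTTPS on 8443, so "first record does not look like a TLS handshake" means a plain-HTTP request hit its TLS port. With kubectl proxy, the `https:` scheme prefix in the service proxy path tells the API server to speak TLS to the backend. A sketch, assuming the standard Dashboard service name and namespace (`kubernetes-dashboard` in `kube-system`, not confirmed by this thread):

```shell
kubectl proxy &

# Note the https: prefix and trailing colon around the service name:
# the API server then terminates the proxy hop with TLS toward the pod
# instead of sending plain HTTP to port 8443.
curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
```

The second log line ("unknown certificate authority") is a different symptom: the client on that hop did not trust the Dashboard's self-signed certificate.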
jonashackt (Owner Author) commented

Addressing this issue also with 9866c71

jonashackt added a commit that referenced this issue Sep 3, 2018
jonashackt (Owner Author) commented

Seems that in general no app is accessible from outside, not only the Dashboard: #12

jonashackt added a commit that referenced this issue Sep 5, 2018
… admin access is now configured. minimal is commented out.
jonashackt added a commit that referenced this issue Sep 5, 2018
jonashackt added a commit that referenced this issue Sep 27, 2018