kube-dashboard generates 502 error #2340

Closed
jeroenjacobs79 opened this issue Sep 6, 2017 · 23 comments
jeroenjacobs79 commented Sep 6, 2017

Environment
Dashboard version: 1.6.3
Kubernetes version: 1.6.6
Operating system: CentOS 7

Steps to reproduce

I installed kube-dashboard according to the instructions provided on the website. When I run kubectl proxy, I'm unable to access the Dashboard UI at http://localhost:8001/ui/. When I access http://localhost:8001/, I see the Kubernetes API output, so kubectl itself is working fine.

Observed result

Getting a 502 error on the following URL: http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/

console output of kubectl proxy:

Starting to serve on 127.0.0.1:8001
I0906 16:46:06.470208   31586 logs.go:41] http: proxy error: unexpected EOF

pod log of the dashboard container:

> $ kubectl logs kubernetes-dashboard-2870541577-czpdb --namespace kube-system                                                                                                                                  
Using HTTP port: 8443
Using in-cluster config to connect to apiserver
Using service account token for csrf signing
No request provided. Skipping authorization header
Successful initial request to the apiserver, version: v1.6.6
No request provided. Skipping authorization header
Creating in-cluster Heapster client
Expected result

Expected to see the Dashboard

Comments

I also have Heapster installed, and it is able to reach the Kubernetes API just fine. So I guess that pod networking, service-CIDR networking, and the service accounts themselves are working fine. It's only kube-dashboard that is giving me issues.

This is the yml file I used to deploy Dashboard:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: kubernetes-dashboard
  template:
    metadata:
      labels:
        app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
        imagePullPolicy: Always
        ports:
        - containerPort: 9090
          protocol: TCP
        args:
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 9090
  selector:
    app: kubernetes-dashboard

@jeroenjacobs79
Author

It seems my pod keeps restarting:

> $ kubectl describe pod kubernetes-dashboard-2870541577-l92t7   --namespace kube-system                                                                                                                        
Name:           kubernetes-dashboard-2870541577-l92t7
Namespace:      kube-system
Node:           ip-10-8-31-179/10.8.31.179
Start Time:     Wed, 06 Sep 2017 17:17:02 +0200
Labels:         app=kubernetes-dashboard
                pod-template-hash=2870541577
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"kubernetes-dashboard-2870541577","uid":"6f84c7d3-9316-11e7-8...
Status:         Running
IP:             10.32.0.3
Controllers:    ReplicaSet/kubernetes-dashboard-2870541577
Containers:
  kubernetes-dashboard:
    Container ID:       docker://5a10948dff403809bbf643d4bd6ac969d5beb24e29ef904932b2b78a0fdaeb5b
    Image:              gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
    Image ID:           docker-pullable://gcr.io/google_containers/kubernetes-dashboard-amd64@sha256:2c4421ed80358a0ee97b44357b6cd6dc09be6ccc27dfe9d50c9bfc39a760e5fe
    Port:               9090/TCP
    State:              Running
      Started:          Wed, 06 Sep 2017 17:20:44 +0200
    Last State:         Terminated
      Reason:           Error
      Exit Code:        2
      Started:          Wed, 06 Sep 2017 17:20:05 +0200
      Finished:         Wed, 06 Sep 2017 17:20:43 +0200
    Ready:              True
    Restart Count:      5
    Liveness:           http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:        <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-qxggc (ro)
Conditions:
  Type          Status
  Initialized   True 
  Ready         True 
  PodScheduled  True 
Volumes:
  kubernetes-dashboard-token-qxggc:
    Type:       Secret (a volume populated by a Secret)
    SecretName: kubernetes-dashboard-token-qxggc
    Optional:   false
QoS Class:      BestEffort
Node-Selectors: <none>
Tolerations:    node-role.kubernetes.io/master=:NoSchedule
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath                           Type            Reason          Message
  ---------     --------        -----   ----                    -------------                           --------        ------          -------
  3m            3m              1       default-scheduler                                               Normal          Scheduled       Successfully assigned kubernetes-dashboard-2870541577-l92t7 to ip-10-8-31-179
  3m            3m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Created         Created container with id b529789e4168f829b3f51b0f892ecf0d44588dbd79bbae0afd448b67aed59909
  3m            3m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Started         Started container with id b529789e4168f829b3f51b0f892ecf0d44588dbd79bbae0afd448b67aed59909
  2m            2m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with id docker://b529789e4168f829b3f51b0f892ecf0d44588dbd79bbae0afd448b67aed59909:pod "kubernetes-dashboard-2870541577-l92t7_kube-system(6f86b442-9316-11e7-8771-0a16f54a32ca)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  2m            2m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Created         Created container with id 3bc74f6d92b4281452385d422b50843af4a742e01b69cb7e468145634af71e70
  2m            2m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Started         Started container with id 3bc74f6d92b4281452385d422b50843af4a742e01b69cb7e468145634af71e70
  2m            2m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with id docker://3bc74f6d92b4281452385d422b50843af4a742e01b69cb7e468145634af71e70:pod "kubernetes-dashboard-2870541577-l92t7_kube-system(6f86b442-9316-11e7-8771-0a16f54a32ca)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  2m            2m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Created         Created container with id 17026077c0406b5d57ebc82b0d0f4552ba5c02ddc21c986a13be83d644404007
  2m            2m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Started         Started container with id 17026077c0406b5d57ebc82b0d0f4552ba5c02ddc21c986a13be83d644404007
  1m            1m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with id docker://17026077c0406b5d57ebc82b0d0f4552ba5c02ddc21c986a13be83d644404007:pod "kubernetes-dashboard-2870541577-l92t7_kube-system(6f86b442-9316-11e7-8771-0a16f54a32ca)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  1m            1m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Created         Created container with id 2c5b1da7496fc1058c072801535a6d24926a58b1d29d1706f3a8ae1c3ad75b36
  1m            1m              1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Started         Started container with id 2c5b1da7496fc1058c072801535a6d24926a58b1d29d1706f3a8ae1c3ad75b36
  51s           51s             1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with id docker://2c5b1da7496fc1058c072801535a6d24926a58b1d29d1706f3a8ae1c3ad75b36:pod "kubernetes-dashboard-2870541577-l92t7_kube-system(6f86b442-9316-11e7-8771-0a16f54a32ca)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  50s           50s             1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Created         Created container with id 1050188028f969242419bd5e0ca81e6471306a93a03fd05ee69a3aa560f8d855
  49s           49s             1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Started         Started container with id 1050188028f969242419bd5e0ca81e6471306a93a03fd05ee69a3aa560f8d855
  3m            11s             6       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Pulled          Successfully pulled image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3"
  3m            11s             6       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Pulling         pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3"
  11s           11s             1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Killing         Killing container with id docker://1050188028f969242419bd5e0ca81e6471306a93a03fd05ee69a3aa560f8d855:pod "kubernetes-dashboard-2870541577-l92t7_kube-system(6f86b442-9316-11e7-8771-0a16f54a32ca)" container "kubernetes-dashboard" is unhealthy, it will be killed and re-created.
  10s           10s             1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Created         Created container with id 5a10948dff403809bbf643d4bd6ac969d5beb24e29ef904932b2b78a0fdaeb5b
  10s           10s             1       kubelet, ip-10-8-31-179 spec.containers{kubernetes-dashboard}   Normal          Started         Started container with id 5a10948dff403809bbf643d4bd6ac969d5beb24e29ef904932b2b78a0fdaeb5b

How can I troubleshoot this if no errors are logged in the output of the kube-dashboard pod?

@floreks
Member

floreks commented Sep 7, 2017

Run kubectl -n <pod_namespace> logs <pod_name> to get logs from a container and paste them here.
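
For example, against the pod from the describe output above (the --previous flag shows logs from the prior container instance, which is useful when the liveness probe keeps restarting the pod):

kubectl -n kube-system logs kubernetes-dashboard-2870541577-l92t7
kubectl -n kube-system logs kubernetes-dashboard-2870541577-l92t7 --previous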

@jeroenjacobs79
Author

I just discovered this is not dashboard-related. Something is not behaving correctly with pod networking across the entire cluster.

@jeroenjacobs79
Author

jeroenjacobs79 commented Sep 7, 2017

Seems I was wrong, it was not a pod networking issue.

The dashboard pods themselves were being killed by Kubernetes. I removed the health checks from the manifest, and now the dashboard pod keeps running. It just took longer for the dashboard to become responsive than the health check anticipated (see the probe sketch below).
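
A less drastic alternative to removing the probe entirely would be to loosen it; a minimal sketch against the Deployment manifest above (the exact values are guesses and may need tuning for a slow-starting dashboard):

        livenessProbe:
          httpGet:
            path: /
            port: 9090
          # give the dashboard more time to become responsive before probing starts,
          # and tolerate a few failed probes before the container is restarted
          initialDelaySeconds: 120
          timeoutSeconds: 30
          periodSeconds: 10
          failureThreshold: 5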

However, this issue still occurs when accessing the dashboard via the "kubectl proxy" method. Within the cluster itself, the dashboard is working fine; with the "kubectl proxy" method, I keep getting 502 errors.

@jeroenjacobs79 jeroenjacobs79 reopened this Sep 7, 2017
@floreks
Member

floreks commented Sep 7, 2017

Try setting the dashboard service type to NodePort and access it using <node_ip>:<node_port>. If it works that way, then it might be an issue with kubectl proxy.
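
A minimal sketch of that change against the Service manifest above (the explicit nodePort is optional; 30081 is simply the value that shows up later in this thread):

kind: Service
apiVersion: v1
metadata:
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort        # was ClusterIP
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30081     # optional; omit to let Kubernetes pick a port
  selector:
    app: kubernetes-dashboard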

@jeroenjacobs79
Author

It works when I expose it as a NodePort and browse to a worker node.

However, when using "kubectl proxy", I'm able to access the Kubernetes API:

> $ curl http://localhost:8001/                                                                                                                                                                     
{
  "paths": [
    "/api",
    "/api/v1",
    "/apis",
    "/apis/apps",
    "/apis/apps/v1beta1",
    "/apis/authentication.k8s.io",
    "/apis/authentication.k8s.io/v1",
    "/apis/authentication.k8s.io/v1beta1",
    "/apis/authorization.k8s.io",
    "/apis/authorization.k8s.io/v1",
    "/apis/authorization.k8s.io/v1beta1",
    "/apis/autoscaling",
    "/apis/autoscaling/v1",
    "/apis/autoscaling/v2alpha1",
    "/apis/batch",
    "/apis/batch/v1",
    "/apis/batch/v2alpha1",
    "/apis/certificates.k8s.io",
    "/apis/certificates.k8s.io/v1beta1",
    "/apis/extensions",
    "/apis/extensions/v1beta1",
    "/apis/policy",
    "/apis/policy/v1beta1",
    "/apis/rbac.authorization.k8s.io",
    "/apis/rbac.authorization.k8s.io/v1alpha1",
    "/apis/rbac.authorization.k8s.io/v1beta1",
    "/apis/settings.k8s.io",
    "/apis/settings.k8s.io/v1alpha1",
    "/apis/storage.k8s.io",
    "/apis/storage.k8s.io/v1",
    "/apis/storage.k8s.io/v1beta1",
    "/healthz",
    "/healthz/ping",
    "/healthz/poststarthook/bootstrap-controller",
    "/healthz/poststarthook/ca-registration",
    "/healthz/poststarthook/extensions/third-party-resources",
    "/healthz/poststarthook/rbac/bootstrap-roles",
    "/logs",
    "/metrics",
    "/swaggerapi/",
    "/ui/",
    "/version"
  ]
}

Or like this:

> $ curl http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/                                                                                                         
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "kubernetes-dashboard",
    "namespace": "kube-system",
    "selfLink": "/api/v1/namespaces/kube-system/services/kubernetes-dashboard",
    "uid": "aeffb2e9-93ca-11e7-8db4-0a2a03e74a8a",
    "resourceVersion": "2836",
    "creationTimestamp": "2017-09-07T12:47:18Z",
    "labels": {
      "app": "kubernetes-dashboard"
    },
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Service\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kubernetes-dashboard\"},\"name\":\"kubernetes-dashboard\",\"namespace\":\"kube-system\"},\"spec\":{\"ports\":[{\"nodePort\":30081,\"port\":80,\"targetPort\":9090}],\"selector\":{\"app\":\"kubernetes-dashboard\"},\"type\":\"NodePort\"}}\n"
    }
  },
  "spec": {
    "ports": [
      {
        "protocol": "TCP",
        "port": 80,
        "targetPort": 9090,
        "nodePort": 30081
      }
    ],
    "selector": {
      "app": "kubernetes-dashboard"
    },
    "clusterIP": "10.107.205.232",
    "type": "NodePort",
    "sessionAffinity": "None"
  },
  "status": {
    "loadBalancer": {}
  }
}

Now this is the full trace of trying to access the ui endpoints:

curl http://localhost:8001/ui/                                                                                                                                                                  
<a href="/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy">Temporary Redirect</a>.

and now curl gives:

curl -v http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/                                                                                                
*   Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 8001 (#0)
> GET /api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/ HTTP/1.1
> Host: localhost:8001
> User-Agent: curl/7.47.0
> Accept: */*
> 
< HTTP/1.1 503 Service Unavailable
< Content-Length: 108
< Content-Type: text/plain; charset=utf-8
< Date: Thu, 07 Sep 2017 13:56:51 GMT
< 
Error: 'dial tcp 10.40.0.3:9090: getsockopt: connection timed out'
* Connection #0 to host localhost left intact
Trying to reach: 'http://10.40.0.3:9090/

This is weird: 10.40.0.3 is the pod address of kube-dashboard. Why is it trying to connect directly to the pod when using the "kubectl proxy" method?

@floreks
Member

floreks commented Sep 7, 2017

I don't know. Maybe the core maintainers responsible for it will know more. However, I don't think this is related to Dashboard, since it works fine when accessed directly (NodePort).

@jeroenjacobs79
Author

I beg to differ; it's only the dashboard that causes me issues. I'm able to access the Kubernetes API via "kubectl proxy" just fine.

@floreks
Member

floreks commented Sep 7, 2017

There is a big difference between accessing the Kubernetes API and accessing applications over the Kubernetes service proxy.

@jeroenjacobs79
Author

Oh, maybe this is important: I'm not running kube-apiserver, kube-scheduler, and kube-controller-manager as pods. This cluster is built from the "Kubernetes the Hard Way" tutorial. I don't know if this makes a difference for kube-dashboard and the proxy method?

@jeroenjacobs79
Author

Care to enlighten me on those differences?

@floreks
Member

floreks commented Sep 7, 2017

It does not make a difference for Dashboard, but there might be an issue with the cluster setup. In the NodePort case, traffic is redirected through the service directly to the pod. With kubectl proxy you are running a proxy tunnel, and all traffic goes through kube-apiserver before reaching the application. That is also why, when using kubectl proxy, additional request headers are not passed to applications: the apiserver drops them. It does not act as a pure reverse proxy.
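
Roughly, the two access paths look like this (the node port and proxy URL are the ones already used in this thread; the node IP is a placeholder):

# NodePort: client -> node IP -> kube-proxy / iptables -> dashboard pod
curl http://<node_ip>:30081/

# kubectl proxy: client -> local proxy -> kube-apiserver -> dashboard pod
kubectl proxy &
curl http://localhost:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy/

For the second path to work, kube-apiserver itself must be able to open a TCP connection to the pod IP (10.40.0.3:9090 in the curl trace above), which is exactly the hop that is timing out here.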

@jeroenjacobs79
Author

I also notice that dashboard is not listed here:

kubectl cluster-info                                                                                                                                                                             
Kubernetes master is running at https://52.214.xx.xx:6443
Heapster is running at https://52.214.xx.xx:6443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://52.214.xx.xx:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
traefik-ingress-controller is running at https://52.214.xx.xx:6443/api/v1/proxy/namespaces/kube-system/services/traefik-ingress-controller

@floreks
Member

floreks commented Sep 7, 2017

You can check how kubectl proxy works for you when the cluster is provisioned using e.g. kubeadm. I can access Dashboard over kubectl proxy without any trouble (on a kubeadm-based cluster).

@floreks
Member

floreks commented Sep 7, 2017

I also notice that dashboard is not listed here:

It won't be. Applications managed by the addon manager are listed there, and we have specifically dropped the annotation that enables it.

@jeroenjacobs79
Author

Kubeadm is not an option; I have used it and I know how it works. My issue is why it doesn't work on MY cluster, which is configured from scratch. I need HA, and kubeadm doesn't support that.

@jeroenjacobs79
Author

Also, don't just suggest another deployment tool. I NEED to figure out how Kubernetes works; it's part of my job. And the K8s reference manuals are vague on the system-operations part.

@rf232
Contributor

rf232 commented Sep 7, 2017

Do you have another service running in your cluster that you could try to access via kubectl proxy?

If not, you can create a very simple one like this:

kubectl run nginx --image nginx:latest --replicas 1
kubectl expose deployments/nginx --port 80
kubectl proxy &
curl -v localhost:8001/api/v1/namespaces/default/services/nginx/proxy/

This will create a deployment and a service, both named nginx, in your default namespace, and then try to contact the running nginx via the kubectl proxy. If this also doesn't work, something in the proxy path is broken. If, however, it does work, we can investigate further why the dashboard is not playing nicely with the proxy.

@floreks
Member

floreks commented Sep 7, 2017

Also, don't just suggest another deployment tool.

I suggested it so you can check that, with a 100% correctly configured cluster, Dashboard is accessible over kubectl proxy, and that this is indeed not a Dashboard issue. That is why I suggested asking for help on the core repository. I don't have time to check the Kubernetes code and investigate the differences between the different ways of accessing applications in the cluster. The documentation is missing a lot of things, and the best way is either to look into the code or to ask the core community for help. They are more familiar with advanced topics such as manual setup of the whole cluster.

@floreks
Member

floreks commented Sep 12, 2017

@jeroenjacobs1205 any progress here?

@maciaszczykm
Member

@jeroenjacobs1205 Follow @rf232's steps and let us know what happened. We cannot help you unless we are able to reproduce your "Kubernetes the Hard Way" setup.

@jeroenjacobs79
Author

Hi, yes, I solved the issue. I think the problem was caused by the fact that my apiserver was not running in a pod (in kubeadm, the master processes run in pods). Since the requests were proxied through the apiserver, but the apiserver had no access to the pod network or the service network (kube-proxy wasn't installed on my master nodes either), kube-apiserver was unable to reach any services.

I now run all my master processes as pods (using static pod manifests) on the master nodes, and everything works fine. It makes sense when I think about it.
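
For reference, a quick way to confirm this kind of problem is to test from the master node itself whether a pod IP is reachable at all (a sketch; the pod IP is the one from the earlier curl trace):

# run on the master node that hosts kube-apiserver
curl --max-time 5 http://10.40.0.3:9090/
# a timeout here means the kubectl proxy path can never work,
# regardless of how the dashboard itself is configured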

Thanks for your assistance with this issue. Next time, I should think a little harder about how all the components work together :-)

@jonashackt

jonashackt commented Sep 26, 2018

Thanks very much @jeroenjacobs1205 for your last comment! That put me on the right track. But your answer is not completely correct: you don't need to run kube-apiserver inside a Pod (although the "from-scratch" guide recommends that; Kelsey Hightower ignores that recommendation). What you need is to make sure that kube-apiserver has access to the network fabric used by your worker nodes (and thus to Services and Pods, incl. a dashboard).

In my case, as I wanted a more comprehensible, cloud-provider-independent setup with https://github.com/jonashackt/kubernetes-the-ansible-way using Vagrant, I chose not to go with the manually plumbed network routes of https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/11-pod-network-routes.md, but to use one of the recommended networking solutions from https://kubernetes.io/docs/setup/scratch/#network-connectivity - I chose Flannel.

The problem was that I had only installed and configured Flannel on my worker nodes - and was thus getting the same errors as you. After fixing the setup so that Flannel also runs on my master nodes (jonashackt/kubernetes-the-ansible-way@fcd203d), I can now flawlessly access all deployed K8s Services and Pods - incl. the Dashboard.
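
A simple sanity check for that kind of setup (a sketch, assuming Flannel's default VXLAN backend) is to verify that the master nodes also have the flannel interface and pod-network routes:

# run on a master node
ip addr show flannel.1       # the flannel VXLAN interface should exist
ip route | grep flannel.1    # routes to the pod CIDRs should go through it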
