
Service & Pod access through kube-apiserver (kube-proxy) not working #389

Closed

jonashackt opened this issue Sep 26, 2018 · 1 comment

jonashackt commented Sep 26, 2018

The official docs about creating a k8s cluster from scratch (https://kubernetes.io/docs/setup/scratch/) recommend running three daemons on every node (incl. master nodes): docker or another container runtime, kubelet, and kube-proxy:

> While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using traditional system administration/automation approaches, the remaining master components of Kubernetes are all configured and managed by Kubernetes

and further down:

> All nodes should run kube-proxy. (Running kube-proxy on a “master” node is not strictly required, but being consistent is easier.)

And etcd, Apiserver, Controller Manager, and Scheduler "are kept running by Kubernetes rather than by init".

I wonder why this guide sets up the container runtime, kubelet, and kube-proxy only on the worker nodes, and why the control plane components are managed by systemd rather than run as k8s pods, even though the scratch docs recommend:

> For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, we recommend that you run these as containers
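Just to illustrate what I mean, a rough sketch of the two management styles (the unit names assume this guide's systemd setup; kubeadm is only one example of the self-hosted alternative):

```bash
# This guide: control plane components are systemd units on the controllers
sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler

# Self-hosted alternative (e.g. kubeadm): the kubelet runs them as static pods,
# so they are visible and managed as pods in the kube-system namespace
kubectl get pods -n kube-system -o wide
```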

The thought behind this long introduction: I have fully worked through your setup and can access applications directly on the worker nodes via workernode-hostname:NodePort (way no. 1 in https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#ways-to-connect). But I can't access them through the Proxy Verb (way no. 2 at the same link), as sketched below. Since there are already people around with the same issue (e.g. kubernetes/dashboard#2340 (comment)) who followed the guide and experienced problems accessing their applications, I was wondering whether this is not a detail problem but rather a cluster design topic?!
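To make the two ways concrete, a minimal sketch (worker-0, my-service, the NodePort 30080, and the port name http are placeholders, not names from the guide):

```bash
# Way no. 1 (works): hit the NodePort directly on a worker node
curl http://worker-0:30080/    # worker-0 and 30080 are placeholders

# Way no. 2 (fails for me): go through the kube-apiserver proxy verb
kubectl proxy --port=8001 &
curl http://localhost:8001/api/v1/namespaces/default/services/my-service:http/proxy/
```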

Doesn't the kube-apiserver need access to the kube-proxy instances, as shown in this picture (iptables-proxy-access) and described at https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables?! But when networking plugins & DNS are only deployed on the worker nodes, maybe the master nodes can't reach the Services & Pods by design? A quick reachability check is sketched below.
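One way to test that hypothesis from a controller node, as a minimal sketch where the pod IP, ClusterIP, and ports are placeholders read off the kubectl output:

```bash
# Look up a pod IP and a ClusterIP to probe (values below are placeholders)
kubectl get pods -o wide
kubectl get svc

# On a controller/master node: if the pod network (overlay or static routes)
# isn't configured here, both of these requests should time out
curl --max-time 5 http://10.200.1.5:8080/    # placeholder pod IP:port
curl --max-time 5 http://10.32.0.10:80/      # placeholder ClusterIP:port
```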

jonashackt (Author) commented:

Fixed: the problem was that we used Flannel instead of the original hard-way manual IP routes, and Flannel was only installed and configured on the worker nodes. So the kube-apiserver couldn't reach the Services and Pods, which were reachable via Flannel only. A sketch of both alternatives follows below.
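For anyone hitting the same thing, a minimal sketch of the two alternatives (all CIDRs, node IPs, and the DaemonSet name are placeholders/assumptions, not values from the guide):

```bash
# Alternative 1, the original hard-way approach: a static route per worker's
# pod CIDR, created on EVERY machine, controllers included
sudo ip route add 10.200.0.0/24 via 10.240.0.20   # placeholder: worker-0 pod CIDR via its node IP
sudo ip route add 10.200.1.0/24 via 10.240.0.21   # placeholder: worker-1 pod CIDR via its node IP

# Alternative 2, Flannel: make sure the flannel DaemonSet is scheduled on the
# controller nodes too, not just on the workers (name/namespace may differ)
kubectl -n kube-system get daemonset kube-flannel-ds -o wide
kubectl -n kube-system get pods -o wide | grep flannel
```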
