Fixed: The problem was that we used Flannel instead of the original hard-way manual IP routes, and Flannel was only installed and configured on the worker nodes. Because of that, the kube-apiserver couldn't access the Services and Pods that were available via Flannel only.
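For context, roughly what I mean by the "original hard-way manual IP routes": a minimal sketch run on the controller node, with made-up pod CIDRs and worker IPs (not our actual values):

```sh
# Hypothetical example values: pod CIDRs 10.200.0.0/24 and 10.200.1.0/24,
# worker node IPs 10.240.0.20 and 10.240.0.21.
# Run on the controller node so the host running the kube-apiserver
# can reach pod IPs directly instead of relying on Flannel:
ip route add 10.200.0.0/24 via 10.240.0.20
ip route add 10.200.1.0/24 via 10.240.0.21
```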
The official docs about creating a k8s cluster from scratch (https://kubernetes.io/docs/setup/scratch/) recommend running three daemons on every node (including the master nodes): docker or another container runtime, kubelet, and kube-proxy:

"While the basic node services (kubelet, kube-proxy, docker) are typically started and managed using traditional system administration/automation approaches, the remaining master components of Kubernetes are all configured and managed by Kubernetes"

and in here:

And etcd, Apiserver, Controller Manager, and Scheduler "are kept running by Kubernetes rather than by init".

I wonder why this guide sets up the container runtime, kubelet, and kube-proxy only on the worker nodes, and why the control plane components are managed by systemd rather than run as k8s pods, when the same docs say:

"For etcd, kube-apiserver, kube-controller-manager, and kube-scheduler, we recommend that you run these as containers"
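To illustrate what I mean by running the control plane components "as k8s pods": a rough sketch of a static pod manifest, assuming a kubelet also runs on the master with --pod-manifest-path=/etc/kubernetes/manifests. The paths, image tag, and flags are illustrative assumptions, not taken from this guide:

```sh
# Hypothetical sketch: write a static pod manifest for the scheduler where a
# master-node kubelet would pick it up and keep it running.
cat <<EOF | sudo tee /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.12.0   # example tag only
    command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
    - --leader-elect=true
    volumeMounts:
    - name: kubeconfig
      mountPath: /etc/kubernetes/scheduler.kubeconfig
      readOnly: true
  volumes:
  - name: kubeconfig
    hostPath:
      path: /etc/kubernetes/scheduler.kubeconfig
EOF
```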
The thoughts behind this long introduction: I have fully understood your setup and can access applications directly on the worker nodes using workernode-hostname:NodePort (way No. 1 in https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-services/#ways-to-connect). But I can't access them through "using the Proxy Verb" (way No. 2 in the same document). As there are already people around with the same issue (e.g. kubernetes/dashboard#2340 (comment)) who followed this guide and had problems accessing their applications, I was wondering whether this isn't a detail problem but rather a cluster design topic. Doesn't the kube-apiserver need to have access to the kube-proxies, as shown in this picture and as described here: https://kubernetes.io/docs/concepts/services-networking/service/#proxy-mode-iptables ? But when the networking plugin & DNS are only deployed on the worker nodes, maybe the master node can't access the Services & Pods by design?
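To make the two ways concrete, this is roughly what I am comparing; the NodePort, namespace, service name, and port name below are placeholders, not our actual setup:

```sh
# Way No. 1 (works for me): hit the NodePort directly on a worker node.
curl http://workernode-hostname:30080/

# Way No. 2 (fails for me): go through the apiserver proxy verb,
# here via a local kubectl proxy on port 8080.
kubectl proxy --port=8080 &
curl http://localhost:8080/api/v1/namespaces/default/services/my-service:http/proxy/
```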