
I am fairly new to Kubernetes and facing an issue where calls to my APIs (deployed in a k8s environment) take 10 seconds. The 10 seconds is not the application's response time but the time it takes to discover the backend; I have hardcoded the response and it still takes 10 seconds. I have a simple Ingress as below:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-config
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: "/admins"
        backend:
          service:
            name: admin-service
            port:
              number: 8070
      - pathType: Prefix
        path: "/employee"
        backend:
          service:
            name: employee-service
            port:
              number: 8080

and a simple Service as below:

apiVersion: v1
kind: Service
metadata:
  name: employee-service
spec:
  selector:
    name: employee-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Below is the application's Deployment YAML:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: employee-app-deployment
  namespace: app-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: employee-app
  template:
    metadata:
      labels:
        app: employee-app
    spec:
      dnsPolicy: Default
      containers:
      - name: employee-spring-app
        image: <image>
        ports:
        - containerPort: 8080

What may be the issue here? Any configuration I am missing?

I tried changing dnsPolicy multiple times and trying different configurations, but it seems I am missing something. If the call goes through the Ingress it takes a lot of time, while a call made directly from inside the application is fast (200ms), so I suspect something is breaking at the Ingress level.
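For reference, the two paths can be compared directly: time a request through the Ingress with curl, and hit the Service from a throwaway pod inside the cluster. This is only a sketch; <INGRESS-IP> and <NAMESPACE> are placeholders and curlimages/curl is just one convenient image:

curl -s -o /dev/null -w "connect: %{time_connect}s  total: %{time_total}s\n" http://<INGRESS-IP>/employee

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -s -o /dev/null -w "total: %{time_total}s\n" http://employee-service.<NAMESPACE>.svc.cluster.local:8080/employee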

UPDATE 1: I tried looking at the logs and I see a 110 connect timeout and an IP which does not exist among my nodes or LBaaS.
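(For context, a 110 connect timeout is typically logged by the ingress controller; those logs can be pulled with something like the following sketch, where the namespace and label assume a standard ingress-nginx install and may differ in other setups:)

kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=200 | grep "timed out"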

1 Answer


The 110 connect timeout error comes from NGINX when it cannot reach its upstream. You can mask it by increasing proxy_read_timeout, but that is not the real solution, because the underlying issue is some name/IP resolution problem. First check whether the service is accessible via IP. Create a port-forward:

kubectl port-forward deployment/employee-app-deployment 8080:8080

and try to reach it at:

http://localhost:8080
  • If it is working fine, then the issue is with DNS or the NGINX controller.

To check whether the issue is with DNS:

kubectl exec -i -t -n <NGINX-NAMESPACE> <NGINX-POD> -- cat /etc/resolv.conf

and make sure DNS is working fine.
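To go a step further, you can also confirm that the Service name actually resolves from inside the cluster. A sketch with a throwaway pod (busybox is just one option, and <NAMESPACE> is a placeholder for the namespace the Service lives in):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup employee-service.<NAMESPACE>.svc.cluster.local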

It is very rare that the issue is the NGINX controller itself, but you can always reinstall it with Helm.
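For reference, a reinstall with Helm looks roughly like this (a sketch assuming the standard ingress-nginx chart and namespace, which may differ in your setup):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace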

  • If not, take a look at the upstreams, i.e. the Endpoints, which correspond to the Pods that match the Service's selector. You can view the Endpoints of a Service easily because they share the Service's name:

    kubectl get -o yaml endpoints employee-service

and check whether the IPs match your Pods with something like:

kubectl get pod podname -o custom-columns=NAME:metadata.name,IP:status.podIP
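
To compare all replicas at once, a sketch (assuming the Pods carry the app: employee-app label from the Deployment):

kubectl get endpoints employee-service -o jsonpath='{.subsets[*].addresses[*].ip}'
kubectl get pods -l app=employee-app -o wide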
  • I tried all of the above; the IP does exist. One odd thing I noticed: when I decreased the number of replicas so that only one pod runs on one node, it worked as expected and there was no issue, but as soon as the pods are distributed across multiple nodes the problem shows up again.
    – Raj
    Commented Jul 9 at 7:56
  • Even via port forwarding, is it working as expected?
    – ha36d
    Commented Jul 9 at 9:58
  • Yes, port forwarding is also working. The problem only happens when there are more nodes in the cluster and the pods are distributed among them; when I keep them all on a single node it works fine.
    – Raj
    Commented Jul 9 at 11:22
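What the comments describe (fine on one node, timeouts once the pods are spread across nodes) usually points at cross-node pod-to-pod networking, e.g. the CNI/overlay traffic being blocked between nodes. A quick sketch to verify it, with placeholder names: note each pod's IP and node, then curl a pod on another node from a throwaway pod (curlimages/curl is just one option):

kubectl get pods -o wide
kubectl run net-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -m 5 http://<POD-IP-ON-ANOTHER-NODE>:8080/employee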
