Failing to pick up health check from readiness probe #241

Closed
sonu27 opened this issue Apr 26, 2018 · 18 comments
Labels
kind/documentation Categorizes issue or PR as related to documentation.

Comments

@sonu27
Contributor

sonu27 commented Apr 26, 2018

When I create a GCE ingress, the Google load balancer does not set the health check from the readiness probe. According to the docs (Ingress GCE health checks), it should pick it up.

Expose an arbitrary URL as a readiness probe on the pods backing the Service.

Any ideas why?

Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend-prod
  labels:
    app: frontend-prod
spec:
  selector:
    matchLabels:
      app: frontend-prod
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: frontend-prod
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
      - image: app:latest
        readinessProbe:
          httpGet:
            path: /healthcheck
            port: 3000
          initialDelaySeconds: 15
          periodSeconds: 5
        name: frontend-prod-app
      - env:
        - name: PASSWORD_PROTECT
          value: "1"
        image: nginx:latest
        readinessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
        name: frontend-prod-nginx

Service:

apiVersion: v1
kind: Service
metadata:
  name: frontend-prod
  labels:
    app: frontend-prod
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: frontend-prod

Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-prod-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: frontend-prod-ip
spec:
  tls:
    - secretName: testsecret
  backend:
    serviceName: frontend-prod
    servicePort: 80
@nicksardo
Contributor

There are several caveats. The health check must not already exist, as the controller won't overwrite its settings. Furthermore, the pods need to exist at the time of ingress creation.

@sonu27
Contributor Author

sonu27 commented Apr 27, 2018

@nicksardo Yes, I know that it won't overwrite the settings of an existing health check.

I created a deployment and service, waited for the Pods to go green in GKE (i.e. for the readiness probes to pass), and then created the ingress, but it just uses the default / (200) check rather than the one from the readiness probe.

Anything else I can provide to prove this is a bug?

@briansneddon

briansneddon commented Apr 27, 2018

@sonu27 In my experience, for it to work the podspec must also include containerPort.

e.g.

    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
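Applied to the nginx container from the deployment above, the fix would look something like this (a sketch, untested; the key addition is the ports entry matching the readiness probe's port):

    spec:
      containers:
      - name: frontend-prod-nginx
        image: nginx:latest
        ports:
        - containerPort: 80   # declare the probe's port so the controller can find it
        readinessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5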
@sonu27
Contributor Author

sonu27 commented Apr 30, 2018

@briansneddon dude! Thanks so much. That did the trick.

Unless I'm mistaken, it doesn't say this anywhere, so I think the docs should really be updated.

@nicksardo added the kind/documentation label May 4, 2018
@nicksardo
Contributor

Feel free to send a quick PR.

@ldelossa

@sonu27 should that documentation say

The container's containerPort field must be defined

?

@iftachsc

iftachsc commented Feb 9, 2019

Hitting the same issue.
I have a readiness probe on a different port than the application. A containerPort is set for the readiness port as well, and I even added a NodePort for it; it doesn't help. The HTTP load balancer gets an HTTP health check on the root path / against the NodePort that matches the service port referenced by the ingress. The port I'm talking about is 15020; see below. Everything works fine with a TCP load balancer (service type: LoadBalancer).

my ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s1-6958b363-istio-system-istio-ingressgateway-80-6262bb6e":"Unknown"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-istio-system-neg-istio-ingressgateway--6958b363ba6aff2a
    ingress.kubernetes.io/target-proxy: k8s-tp-istio-system-neg-istio-ingressgateway--6958b363ba6aff2a
    ingress.kubernetes.io/url-map: k8s-um-istio-system-neg-istio-ingressgateway--6958b363ba6aff2a
  creationTimestamp: 2019-02-09T18:22:41Z
  generation: 1
  name: neg-istio-ingressgateway
  namespace: istio-system
  resourceVersion: "10456273"
  selfLink: /apis/extensions/v1beta1/namespaces/istio-system/ingresses/neg-istio-ingressgateway
  uid: b03b31d7-2c97-11e9-a4a9-42010a04000a
spec:
  backend:
    serviceName: istio-ingressgateway
    servicePort: 80

my service:

apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/neg-status: '{"network_endpoint_groups":{"80":"k8s1-6958b363-istio-system-istio-ingressgateway-80-6262bb6e"},"zones":["us-central1-a","us-central1-b","us-central1-c"]}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"creationTimestamp":"2019-01-17T08:10:10Z","labels":{"addonmanager.kubernetes.io/mode":"Reconcile","app":"istio-ingressgateway","chart":"gateways-1.0.3","heritage":"Tiller","istio":"ingressgateway","k8s-app":"istio","kubernetes.io/cluster-service":"true","release":"istio"},"name":"istio-ingressgateway","namespace":"istio-system","resourceVersion":"9621470","selfLink":"/api/v1/namespaces/istio-system/services/istio-ingressgateway","uid":"4f10e4db-1a2f-11e9-8ca4-42010a040006"},"spec":{"clusterIP":"10.160.0.203","externalTrafficPolicy":"Cluster","ports":[{"name":"http2","nodePort":31380,"port":80,"protocol":"TCP","targetPort":80},{"name":"https","nodePort":31390,"port":443,"protocol":"TCP","targetPort":443}],"selector":{"app":"istio-ingressgateway","istio":"ingressgateway"},"sessionAffinity":"None","type":"LoadBalancer"},"status":{"loadBalancer":{"ingress":[{"ip":"35.224.239.229"}]}}}
  creationTimestamp: 2019-01-17T08:10:10Z
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    app: istio-ingressgateway
    chart: gateways-1.0.3
    heritage: Tiller
    istio: ingressgateway
    k8s-app: istio
    kubernetes.io/cluster-service: "true"
    release: istio
  name: istio-ingressgateway
  namespace: istio-system
  resourceVersion: "10455696"
  selfLink: /api/v1/namespaces/istio-system/services/istio-ingressgateway
  uid: 4f10e4db-1a2f-11e9-8ca4-42010a040006
spec:
  clusterIP: 10.160.0.203
  externalTrafficPolicy: Cluster
  ports:
  - name: http2
    nodePort: 31380
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 31390
    port: 443
    protocol: TCP
    targetPort: 443
  - name: status-port
    nodePort: 30905
    port: 15020
    protocol: TCP
    targetPort: 15020
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer

my deployment: (DS)

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: 2019-02-07T13:54:40Z
  generation: 3
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    app: istio-ingressgateway
    chart: gateways-1.0.3
    heritage: Tiller
    istio: ingressgateway
    k8s-app: istio
    release: istio
  name: istio-ingressgateway-ds
  namespace: istio-system
  resourceVersion: "10452455"
  selfLink: /apis/extensions/v1beta1/namespaces/istio-system/daemonsets/istio-ingressgateway-ds
  uid: ea116380-2adf-11e9-9f2c-42010a040009
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: istio-ingressgateway
      istio: ingressgateway
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
        seccomp.security.alpha.kubernetes.io/pod: docker/default
        sidecar.istio.io/inject: "false"
      creationTimestamp: null
      labels:
        app: istio-ingressgateway
        istio: ingressgateway
    spec:
      containers:
      - args:
        - proxy
        - router
        - -v
        - "2"
        - --discoveryRefreshDelay
        - 1s
        - --drainDuration
        - 45s
        - --parentShutdownDuration
        - 1m0s
        - --connectTimeout
        - 10s
        - --serviceCluster
        - istio-ingressgateway
        - --zipkinAddress
        - zipkin:9411
        - --proxyAdminPort
        - "15000"
        - --statusPort
        - "15020"
        - --controlPlaneAuthPolicy
        - NONE
        - --discoveryAddress
        - istio-pilot:8080
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: status.podIP
        - name: ISTIO_META_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        image: gcr.io/gke-release/istio/proxyv2:1.0.3-gke.0
        imagePullPolicy: IfNotPresent
        name: istio-proxy
        ports:
        - containerPort: 15020
          name: status-port
          protocol: TCP
        - containerPort: 80
          protocol: TCP
        - containerPort: 443
          protocol: TCP
        - containerPort: 31400
          protocol: TCP
        - containerPort: 15011
          protocol: TCP
        - containerPort: 8060
          protocol: TCP
        - containerPort: 853
          protocol: TCP
        - containerPort: 15030
          protocol: TCP
        - containerPort: 15031
          protocol: TCP
        - containerPort: 15090
          name: http-envoy-prom
          protocol: TCP
        readinessProbe:
          failureThreshold: 30
          httpGet:
            path: /healthz/ready
            port: 15020
            scheme: HTTP
          initialDelaySeconds: 1
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/certs
          name: istio-certs
          readOnly: true
        - mountPath: /etc/istio/ingressgateway-certs
          name: ingressgateway-certs
          readOnly: true
        - mountPath: /etc/istio/ingressgateway-ca-certs
          name: ingressgateway-ca-certs
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: istio-ingressgateway-service-account
      serviceAccountName: istio-ingressgateway-service-account
      terminationGracePeriodSeconds: 30
      volumes:
      - name: istio-certs
        secret:
          defaultMode: 420
          optional: true
          secretName: istio.istio-ingressgateway-service-account
      - name: ingressgateway-certs
        secret:
          defaultMode: 420
          optional: true
          secretName: istio-ingressgateway-certs
      - name: ingressgateway-ca-certs
        secret:
          defaultMode: 420
          optional: true
          secretName: istio-ingressgateway-ca-certs
  templateGeneration: 3
  updateStrategy:
    type: OnDelete
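Worth noting: newer versions of ingress-gce on GKE let you configure the health check explicitly through a BackendConfig instead of relying on readiness-probe inference, which may be the cleanest way to health-check a status port that differs from the traffic port. A minimal sketch, assuming BackendConfig support is available in the cluster (the name istio-gw-backendconfig is made up):

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: istio-gw-backendconfig   # hypothetical name
  namespace: istio-system
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz/ready  # the gateway's status endpoint
    port: 15020                  # status port, distinct from the traffic port

The Service would then reference it with the annotation cloud.google.com/backend-config: '{"default": "istio-gw-backendconfig"}'.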

FrankPetrilli added a commit to FrankPetrilli/ingress-gce that referenced this issue Feb 13, 2019
Import limitations from examples/health-checks/README.md and add new limitation mentioned in kubernetes#241.
@g00nix

g00nix commented Feb 17, 2019

@sonu27 should that documentation say

The container's containerPort field must be defined

?

Give @ldelossa a medal, please. You waste hours reading GitHub issues because the documentation doesn't say that one field is required.

@psalaberria002

@iftachsc did you find a solution? I am also having trouble understanding how that would work in practice.

@cloudgrimm

@psalaberria002 If you find a workaround, please share it here; this is getting more confusing by the day.

@IbbyBenali

Agree with @cloudgrimm and @psalaberria002... I'm also having trouble understanding how exactly this works.

@iftachsc

@iftachsc did you find a solution? I am also having trouble understanding how that would work in practice.

Nope.

gdbelvin added a commit to gdbelvin/keytransparency that referenced this issue Jan 3, 2020
GCE Ingress Controllers require HTTP 200 to be served from '/'
kubernetes/ingress-gce#241 (comment)
gdbelvin added a commit to gdbelvin/keytransparency that referenced this issue Jan 6, 2020
GCE Ingress Controllers require HTTP 200 to be served from '/'
kubernetes/ingress-gce#241 (comment)
gdbelvin added a commit to google/keytransparency that referenced this issue Jan 6, 2020
* /healthz and /readyz endpoints

* Configure k8 to use readinessProbes

* Use readyz in docker_compose

* Use readyz in docker_compose

* Use readyz in k8 test

* add parallel build

* add parallel build

* Add '/' handler to metrics addr

* Serve http 200 at '/'

GCE Ingress Controllers require HTTP 200 to be served from '/'
kubernetes/ingress-gce#241 (comment)

* Set k8 probe scheme to https

* Monitor dial logs

* Set monitor's KT url
@Ozrlz

Ozrlz commented Feb 19, 2020

> [quotes @iftachsc's Feb 9, 2019 comment and manifests above in full]

@iftachsc I have the exact same use case and I've faced the exact same issue. Did you find a way to get HTTP traffic routed to the istio-ingressgateway when the health checks are on a different port than the application port?

I've recreated everything from scratch, and the health checks still point to the default /.

@gvko

gvko commented Apr 13, 2020

@sonu27 In my experience, for it to work the podspec must also include containerPort.

e.g.

    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

Didn't work for me. I've got the following spec for my container deployment:

    spec:
      containers:
        - name: xxx
          image: xxx
          imagePullPolicy: Always
          ports:
            - containerPort: 3030   # added this for the Ingress to pick up the probe on /healthz
          livenessProbe:
            httpGet:
              path: /healthz
              port: 3030
              httpHeaders:
                - name: Kubernetes-Probe
                  value: Liveness
            initialDelaySeconds: 5
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /healthz
              port: 3030
              httpHeaders:
                - name: Kubernetes-Probe
                  value: Readiness
            initialDelaySeconds: 5
            periodSeconds: 5

the service spec:

spec:
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 3030
  selector:
    app: xxx
  type: "NodePort"

and the ingress:

spec:
  rules:
    - host: "api.dev.my-site.com"
      http:
        paths:
          - backend:
              serviceName: xxx-service-dev
              servicePort: 80
@jpbochi

jpbochi commented Apr 14, 2020

@gvko If I recall correctly, I only managed to fix this issue in my cluster after I went to https://console.cloud.google.com/compute/healthChecks, manually deleted a wrong check, and ran kubectl apply again.

@gvko

gvko commented Apr 14, 2020

@jpbochi, I tried that already before writing here, but it didn't help. I tried deleting the root HC (the one pointing to /), but it wouldn't get removed.
I also tried editing the LB itself the same way, but once I removed the root HC, the backends were marked unhealthy.
And if I ran kubectl apply on the ingress again, the HCs would get overwritten back to their original values.
So, eventually, I just left the application with 2 HC endpoints: /healthz and /, one for the readiness probe and one for the ingress...
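For illustration, a sketch of that two-endpoint workaround in the probe config (the port is taken from the manifests above; the app itself must return 200 on both paths):

readinessProbe:
  httpGet:
    path: /healthz   # checked by the kubelet
    port: 3030
# The GCE health check, left at its default, does GET / against the same
# serving port, so the app also has to answer 200 at / for the backends
# to be reported healthy.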

@VGerris

VGerris commented May 14, 2020

Thanks for the tip. I had the same issue: I defined readiness and liveness probes and redeployed the ingress, but it didn't work. Since it worked perfectly when following this:

https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress

without any additional firewall or load-balancing config, I figured it must be /healthz and / not returning 200. Fixed that and everything started to work, including certificates.
Makes one wonder whether a bug should be filed for the readiness/liveness check.
Anyway, glad this works. I'll make an effort to at least get the documentation improved.

@pabloxtiyo

This saved my life too, thank you!
