Expected Behavior

We have Argo CD running in numerous Kubernetes clusters. This includes:

- argocd-redis-ha-server StatefulSet pods with a redis container listening on 6379
- argocd-redis-ha-server StatefulSet pods with a sentinel container listening on 26379
- argocd-redis-ha-haproxy ReplicaSet pods with a redis container listening on ports 6379 and 9101, fronted by a Kubernetes Service

We have Calico NetworkPolicies in place to allow ingress to these ports (a sketch of the kind of policy is shown below), and so we expect Argo to work, with nothing being denied. (We have a log & deny-all rule at the end too.)
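For illustration, a minimal sketch of the kind of policy described above; the policy name, namespace, and label selectors here are assumptions, not the actual manifests from these clusters:

```yaml
# Illustrative sketch only: name, namespace, and labels are assumed,
# not copied from the clusters in question.
apiVersion: projectcalico.org/v3
kind: NetworkPolicy
metadata:
  name: allow-redis-ha-ingress
  namespace: argocd
spec:
  selector: app == 'argocd-redis-ha-server'   # assumed pod label
  types:
    - Ingress
  ingress:
    # Allow the haproxy pods to reach redis (6379) and sentinel (26379).
    - action: Allow
      protocol: TCP
      source:
        selector: app == 'argocd-redis-ha-haproxy'   # assumed pod label
      destination:
        ports: [6379, 26379]
    # Log, then deny, anything else that reaches the end of the policy.
    - action: Log
    - action: Deny
```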
Current Behavior
From time to time (roughly once a month per cluster), at random times that do not coincide with new calico-node or Argo pods, we will see a burst of three blocked Argo flows spaced roughly 100 seconds apart, e.g. one at 4:57:39 pm, one at 4:59:19 pm, and one at 5:01:00 pm.
These blocked flows report the inverse of the flow we'd normally expect.
e.g. Blocked: argocd-redis-ha-server:26379 --> argocd-redis-ha-haproxy:40962
Expected flow: argocd-redis-ha-haproxy:40962 --> argocd-redis-ha-server:26379
e.g. Blocked: argocd-redis-ha-server:6379 --> argocd-redis-ha-proxy:51418
Expected flow: argocd-redis-ha-proxy:51418 --> argocd-redis-ha-server:6379
I don't see anything out of the ordinary in the Calico pod logs. My understanding of networking is weak, but it feels like Calico, which should be stateful, is potentially losing track of the state of these network flows. Is that possible? Or are there any other theories?
Possible Solution
Steps to Reproduce (for bugs)
Context
Your Environment
Calico version: v3.27.0
Orchestrator version (e.g. kubernetes, mesos, rkt): EKS with kubelet v1.28.8-eks-ae9a62a
Operating System and version: Amazon Linux 2, 5.10.217-205.860.amzn2.x86_64
Link to your project (optional):
Yes, Calico is a stateful firewall; we track connections in the kernel's connection tracking ("conntrack") table. You can see conntrack entries with conntrack -L to list them all, or conntrack -E to watch for changes.
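For example, to narrow the output down to the flows in this issue (the port filters are just the sentinel and redis ports mentioned above; run this on the node hosting the pods):

```sh
# List all tracked TCP connections to the sentinel port (26379).
conntrack -L -p tcp --dport 26379

# Watch conntrack events (new/update/destroy) for the redis port in real time.
conntrack -E -p tcp --dport 6379
```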
The fact that the denied packets are in the reverse direction suggests that there was a previous connection that was being tracked, but its entry was cleaned up. This could happen for a few reasons:

- The connection was closed and these are retransmitted FIN packets at the end of the connection.
- The connection was silent for a very long time and the conntrack entry timed out. The timeouts are controlled by several sysctl settings; net.netfilter.nf_conntrack_tcp_timeout_established is the one for connections that were fully established. It is typically very long (days), but connections that are silent for a long time do hit it (see the check after this list).
- The conntrack entry was deliberately removed. Calico does this when a local pod is torn down, to prevent a later pod with the same IP from re-using the entries.
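A quick way to inspect (and, if needed, raise) that timeout on a node; 432000 seconds (5 days) is the usual kernel default, though distributions can differ:

```sh
# Read the established-connection timeout, in seconds (kernel default: 432000 = 5 days).
sysctl net.netfilter.nf_conntrack_tcp_timeout_established

# Raise it if long-idle connections are being evicted; takes effect immediately
# but does not persist across reboots (add a drop-in under /etc/sysctl.d/ for that).
sysctl -w net.netfilter.nf_conntrack_tcp_timeout_established=864000
```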