# Apply NetworkPolicies
NetworkPolicies restrict pod-to-pod communication to only the traffic that is required. This is a recommended security hardening step for production deployments, particularly those requiring STIG compliance.
NetworkPolicies require a CNI plugin that supports them (e.g., Calico, Cilium, Weave, Antrea, or Canal). The default k3s CNI (Flannel) does not enforce NetworkPolicies. If your CNI does not support them, the policies will be created but have no effect.
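One quick way to check which CNI plugin is running (a sketch; pod names vary by distribution, and this requires cluster access):

```sh
# List pods in kube-system and look for a known CNI agent.
# Pod names vary by install; this pattern covers the common plugins.
kubectl get pods -n kube-system -o name \
  | grep -Ei 'calico|cilium|weave|antrea|canal|flannel' \
  || echo "no known CNI pod found"
```

Note that seeing Flannel here means NetworkPolicies will be accepted by the API server but not enforced.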
## Prerequisites
- A running NetFoundry Self-Hosted installation with the `support` and `ziti` namespaces
- A Kubernetes CNI that supports NetworkPolicies
- `kubectl` access to the cluster
## Automated deployment
The quickstart installer can apply NetworkPolicies automatically when run with the `-H` (hardened) flag on a BYO cluster (non-k3s):

```sh
CTRL_ADDR=<your-hostname> bash quickstart.sh -y -H
```
The installer will:
- Detect your CNI plugin and warn if NetworkPolicy support is not confirmed
- Apply default-deny ingress policies with allow-list rules for both namespaces
- Validate connectivity between components after applying
- Automatically roll back if connectivity checks fail
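For reference, a default-deny ingress policy of the kind the installer applies typically looks like the following (a generic sketch, not necessarily the exact manifest shipped in `installers/hardened/`):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: support
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
    - Ingress       # no ingress rules listed, so all inbound traffic is denied
```

The allow-list policies are then layered on top: traffic is permitted if any policy selects the destination pod and allows that source and port.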
## Manual deployment
To apply NetworkPolicies on an existing installation without re-running the installer:
```sh
./installers/network-policies.sh
```
Or apply the manifests directly:
```sh
kubectl apply -f installers/hardened/support-networkpolicies.yaml
kubectl apply -f installers/hardened/ziti-networkpolicies.yaml
```
## What gets applied
### Support namespace
| Policy | From | To | Port |
|---|---|---|---|
| Default deny | — | All pods | All (denied) |
| Logstash → Elasticsearch | logstash | elasticsearch | 9200 |
| Grafana → Elasticsearch | grafana | elasticsearch | 9200 |
| Kibana → Elasticsearch | kibana | elasticsearch | 9200 |
| ES inter-node | elasticsearch | elasticsearch | 9200, 9300 |
| ECK operator → Elasticsearch | elastic-system namespace | elasticsearch | All |
| ECK operator → Kibana | elastic-system namespace | kibana | All |
| Logstash → RabbitMQ | logstash | rabbitmq | 5672 |
| Ziti → RabbitMQ | ziti namespace | rabbitmq | 5672 |
| RabbitMQ → Logstash | rabbitmq | logstash | 5010 |
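As an illustration, the cross-namespace `Ziti → RabbitMQ` rule above could be expressed with a `namespaceSelector`, roughly like this (the pod label is an assumption; check the shipped manifests for the actual selectors):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ziti-to-rabbitmq
  namespace: support
spec:
  podSelector:
    matchLabels:
      app: rabbitmq   # assumed pod label
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ziti   # standard namespace label
      ports:
        - protocol: TCP
          port: 5672
```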
### Ziti namespace
| Policy | From | To | Port |
|---|---|---|---|
| Default deny | — | All pods | All (denied) |
| Controller ingress | Any | ziti-controller | 1280 |
| Router → Controller | ziti-router | ziti-controller | 6262 |
| Router edge ingress | Any | ziti-router | 3022 |
| cert-manager | cert-manager namespace | All pods | All |
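The `Controller ingress` row, which allows traffic from any source, would be written as an ingress rule with no `from` clause, roughly as follows (the pod label is an assumption):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-controller-ingress
  namespace: ziti
spec:
  podSelector:
    matchLabels:
      app: ziti-controller   # assumed pod label
  policyTypes:
    - Ingress
  ingress:
    - ports:                 # no `from` clause: any source may reach this port
        - protocol: TCP
          port: 1280
```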
## Customizing policies
The policy manifests are located at:
- `installers/hardened/support-networkpolicies.yaml`
- `installers/hardened/ziti-networkpolicies.yaml`
To add custom rules (for example, allowing external access to Kibana or Grafana), create additional NetworkPolicy resources in the appropriate namespace:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-grafana-ingress
  namespace: support
spec:
  podSelector:
    matchLabels:
      app: grafana
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 3000
```
## Rolling back
To remove all NetworkPolicies and restore unrestricted communication:
```sh
kubectl delete -f installers/hardened/support-networkpolicies.yaml
kubectl delete -f installers/hardened/ziti-networkpolicies.yaml
```
## Troubleshooting
If pods cannot communicate after applying policies, check which policies are active:
```sh
kubectl get networkpolicies -n support
kubectl get networkpolicies -n ziti
```
Verify that your CNI is actually enforcing policies. A common pitfall is that Flannel (the default k3s CNI) accepts NetworkPolicy resources without enforcing them. To confirm enforcement, check whether the default-deny policy actually blocks traffic that is not explicitly allowed.
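One way to test enforcement (a sketch; the service name follows the ECK `<name>-es-http` convention and is an assumption, as is the image) is to start a throwaway pod that is not covered by any allow rule and attempt a connection that default-deny should block:

```sh
# From a pod with no matching allow-rule label, this request should time out
# if the CNI is enforcing the default-deny policy.
kubectl run np-test --rm -it --restart=Never --image=curlimages/curl -n support -- \
  curl -m 5 -sk https://elasticsearch-es-http:9200 \
  && echo "policy NOT enforced" \
  || echo "connection blocked (enforced, or service unreachable)"
```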
To re-run the automated deployment with connectivity validation:
```sh
./installers/network-policies.sh
```
The script will apply policies and test connectivity. If checks fail, it will roll back automatically and print instructions for manual investigation.