We have a simple k8s cluster, consisting of a master and a couple of workers running on Ubuntu 20.04 LTS.
The firewall is not enabled on any of them: ufw (the default tool for configuring iptables rules on Ubuntu) is disabled, and iptables contains no specific rules filtering inbound or outbound k8s cluster traffic.
The ufw tool is very handy, but it lacks zone management. We wanted a simple firewall where it is clearly stated that some ports and services are open to the whole world, some are open only to specific IP addresses, and for some IP addresses (the private network) all ports and services are open. Zone management was therefore a hard requirement, and the reason why we opted for firewalld. The selected firewall should also protect the k8s nodes against insecure workload configuration, e.g. when a k8s hostPort is configured or when a k8s container interacts with the docker socket.
In this post, we will show you how we set up firewalld in a k8s cluster to cover the following:
- Secure K8s nodes with firewalld as iptables frontend
- Secure K8s CNI Flannel plugin
- Secure K8s CRI Docker plugin
Set up firewalld
- Public – open some ports and services to the whole world
- Trusted – open all ports and services for specific IP addresses (the private network)
- Internal – open some ports and services for specific IP addresses
We used firewalld's predefined zones. Thanks to the three-zone configuration it is now very easy to open ports and services for another IP address: all you have to do is add that address to the internal or trusted zone.
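A few read-only commands are handy for inspecting the zone setup (assuming firewalld is already installed):

```shell
# list all predefined zones
sudo firewall-cmd --get-zones
# show the default zone (public after installation)
sudo firewall-cmd --get-default-zone
# show only active zones, i.e. zones with an interface or source assigned
sudo firewall-cmd --get-active-zones
```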
After installing firewalld on Ubuntu, the service starts automatically. Out of the box, firewalld has the ssh service enabled in the public zone (which is the default zone), and the configuration looks like this:
$ sudo firewall-cmd --list-all
public
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: dhcpv6-client ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
# we do not need dhcpv6-client service
# remove dhcpv6-client from runtime and permanent configuration
$ sudo firewall-cmd --remove-service=dhcpv6-client
$ sudo firewall-cmd --permanent --remove-service=dhcpv6-client
You may notice that we did not specify a zone in the example; the commands then apply to the default zone, which is public. We can also see that the zone has a default target, which according to the documentation behaves like %%REJECT%%: all packets that are not destined for the ssh service are rejected.
No interfaces or source IP addresses are specified, so this zone is open to the whole world. This makes the default public zone an exception, because any other zone must have a source IP address or an interface specified in order to become active.
Other zone settings are also visible, for example icmp-block-inversion is no, so icmp is not blocked and ping is allowed.
We use an Ingress to expose services running in the k8s cluster, so we must enable the http and https services in the public zone on all workers.
# enable http and https in runtime and permanent configuration
worker:~$ sudo firewall-cmd --add-service=http --add-service=https
worker:~$ sudo firewall-cmd --permanent --add-service=http --add-service=https
For Kubernetes to work properly, several ports need to be open, including the ports required by the CNI (Container Network Interface) plugin. We use the Flannel plugin, which is simple and fully covers our requirements. Flannel needs port 8472/udp when the vxlan backend is used, and 8285/udp for the udp backend.
Our job was a bit easier, because the cluster runs on the private network 10.0.0.0/8. We therefore configured Flannel to use this network with the --iface=[our private network interface] argument of the kube-flannel container. Then we added this network to the trusted zone on the master and all workers, which makes that zone active. We could also have used the private interface instead of the private network.
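For illustration, the relevant part of the kube-flannel-ds DaemonSet could look like the sketch below; the interface name eth1 is a placeholder for your private network interface, while the other args come from the stock Flannel manifest:

```yaml
# excerpt from the kube-flannel-ds DaemonSet spec (sketch)
containers:
- name: kube-flannel
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1   # placeholder: use your private network interface
```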
# add private network to runtime and permanent configuration of trusted zone
$ sudo firewall-cmd --zone=trusted --add-source=10.0.0.0/8
$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.0.0.0/8
$ sudo firewall-cmd --zone=trusted --list-all
The target is ACCEPT, which means that we accept all packets coming from the private network 10.0.0.0/8. This ensures the functioning of the Kubernetes cluster. If we did not have a private network, we would have to add the specific IP addresses to this zone, or add the specific IP addresses and ports to the internal zone.
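As a sketch, the port-based variant for the internal zone could look roughly like this; NODE_IP is a placeholder for each cluster node's address, and the port list follows the well-known Kubernetes defaults (6443 API server, 2379-2380 etcd, 10250 kubelet, 8472/udp Flannel vxlan, 30000-32767 NodePort services):

```shell
# sketch: open k8s ports in the internal zone for explicit node addresses
sudo firewall-cmd --permanent --zone=internal --add-source=NODE_IP
# control plane and kubelet ports
sudo firewall-cmd --permanent --zone=internal \
  --add-port=6443/tcp --add-port=2379-2380/tcp --add-port=10250/tcp
# Flannel vxlan backend and the NodePort range
sudo firewall-cmd --permanent --zone=internal \
  --add-port=8472/udp --add-port=30000-32767/tcp
sudo firewall-cmd --reload
```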
In order to be able to access this cluster from some public IP address as well, we must enable port 6443/tcp (the Kubernetes API server) on the master.
# remove default services in internal zone
master:~$ sudo firewall-cmd --permanent --zone=internal --remove-service=dhcpv6-client --remove-service=mdns --remove-service=samba-client --remove-service=ssh
# add k8s api server port and some ip address to internal zone
master:~$ sudo firewall-cmd --permanent --zone=internal --add-port=6443/tcp
master:~$ sudo firewall-cmd --permanent --zone=internal --add-source=SOME_IP
# add permanent configuration to runtime
master:~$ sudo firewall-cmd --reload
master:~$ sudo firewall-cmd --zone=internal --list-all
The only problem we hit is that SOME_IP now loses access to the services and ports in the public zone. This follows from the firewalld settings and the fact that both the internal and the public zone have the default target: when a packet from SOME_IP arrives for ssh, it does not find the ssh service in the internal zone, and since the target is default, the packet is looked up in the next zone. It finds the public zone, but there is a default target there as well, so firewalld rejects the packet.
There are two ways to fix this:
- enable AllowZoneDrifting in the firewalld configuration, which is not recommended
- add the ssh service to the internal zone as well, which we prefer
# add ssh service also to runtime and permanent configuration of internal zone
master:~$ sudo firewall-cmd --zone=internal --add-service=ssh
master:~$ sudo firewall-cmd --permanent --zone=internal --add-service=ssh
Secure K8s CNI Flannel plugin
There are several ways to expose services in Kubernetes. One of them is hostPort. This functionality is also available in Flannel via the official portmap plugin, which uses iptables to map a host port to a container port. However, it places its rules in the PREROUTING chain, so they are applied even earlier than the firewalld rules, and a security hole may arise. Fortunately, the plugin gives us a way to prevent that: in the ConfigMap kube-flannel-cfg it is necessary to add the parameter "conditionsV4" to the portmap plugin and restart the kube-flannel-ds DaemonSet.
"conditionsV4": ["-s", "10.0.0.0/8"]
If we add such a condition, the portmap plugin appends it to each iptables rule it creates, so the host port is exposed only on the private network 10.0.0.0/8. If we want, we can add other IP addresses separated by a comma, e.g. "conditionsV4": ["-s", "10.0.0.0/8,SOME_IP"]. Of course, it is possible to add other iptables conditions as well.
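Putting it together, the cni-conf.json entry in the kube-flannel-cfg ConfigMap would then look something like this sketch, based on the stock Flannel configuration:

```json
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true },
      "conditionsV4": ["-s", "10.0.0.0/8"]
    }
  ]
}
```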
Secure K8s CRI Docker plugin
Here we hit the same problem as with hostPort. It arises when we run a container not through Kubernetes but directly in Docker via docker run with -p or -P, or when a k8s container interacts directly with the docker socket.
Docker CRI (Container Runtime Interface) adds its iptables rules to the DOCKER chain in the prerouting step, and thus applies them before any firewalld rules. This may cause a security hole. Docker knows about this problem and therefore created the chain called DOCKER-USER, where we can add our own rules. We can modify this iptables chain using the firewalld direct interface and solve the security problem as follows.
# add the DOCKER-USER chain to firewalld
$ sudo firewall-cmd --permanent --direct --add-chain ipv4 filter DOCKER-USER
# allow return traffic of established connections, so containers can reach the outside world
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# drop all other traffic to DOCKER-USER
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 10 -j DROP
# Optional DOCKER-USER chain settings
## add your docker subnets to allow container communication
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 0 -s 172.17.0.0/16 -j RETURN
## add some private IP addresses
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 0 -s 10.0.0.0/8 -j ACCEPT
## add some public ports; use --ctorigdstport and --ctdir as described at https://serverfault.com/a/933803
$ sudo firewall-cmd --permanent --direct --add-rule ipv4 filter DOCKER-USER 0 -p tcp -m conntrack --ctorigdstport 65535 --ctdir ORIGINAL -j ACCEPT
# restart services
$ sudo systemctl restart firewalld
$ sudo systemctl restart docker
When docker is restarted, it adds the rule -A FORWARD -j DOCKER-USER to iptables, which ensures that the created DOCKER-USER rules are applied. In some cases this rule was not added, so we decided to add it ourselves in a docker service unit drop-in configuration file as follows:
$ cat /etc/systemd/system/docker.service.d/docker-user.conf
[Service]
# delete the forward rule first to prevent duplicates; the leading "-" ignores a missing rule
ExecStartPost=-/bin/bash -c 'iptables -D FORWARD -j DOCKER-USER'
# insert the forward rule at the top of the FORWARD chain
ExecStartPost=/bin/bash -c 'iptables -I FORWARD -j DOCKER-USER'
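After creating the drop-in, reload systemd and restart docker; a quick sanity check (as root) is to confirm that the DOCKER-USER jump sits at the top of the FORWARD chain:

```shell
sudo systemctl daemon-reload
sudo systemctl restart docker
# after the -P policy line, the first rule should be: -A FORWARD -j DOCKER-USER
sudo iptables -S FORWARD | head -n 3
```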