

In some industries it is essential to adhere to regulatory requirements. Part of those requirements is the ability to define policies at different levels of IT operation, for both technical and organizational processes. VCPS includes several technologies, partly as base Kubernetes services and partly as additional cluster extensions, that enable organizations to define and enforce such policies.

Kubernetes Plugins

Every VCPS base cluster contains additional services and configuration that empower operators and users to fulfill different requirements for organizational and technical security.


Cilium

Cilium is the CNI plugin used in VCPS. It uses the eBPF facilities of modern Linux kernels to configure complex network data paths for communication between containers running on different nodes. The components of Cilium and their interactions are illustrated in the following image (source):

The most important elements are:

  • cilium-agent provides an API to interact with Cilium and updates the eBPF programs to reflect the current state of packet routing requirements.
  • cilium-cni integrates with the container runtime and is invoked whenever a container is scheduled or terminated. It interacts with the Cilium API to make it aware of those changes.
  • cilium CLI can be used to interact with the Cilium API on the command line to inspect the current state or make changes.

Within Kubernetes, Cilium needs to store certain information in a global key-value store. By default, Kubernetes Custom Resources are used, but a separate etcd cluster can be used instead.
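
As an illustration, the backend can be switched from Custom Resources to a dedicated etcd cluster through Cilium's configuration. The following is a sketch of the relevant cilium-config ConfigMap keys; the referenced etcd config file path is a placeholder and key names should be verified against the Cilium version in use:

  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: cilium-config
    namespace: kube-system
  data:
    # Store Cilium state in an external etcd cluster instead of Kubernetes CRDs
    kvstore: etcd
    kvstore-opt: '{"etcd.config": "/var/lib/etcd-config/etcd.config"}'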

Using these elements, Cilium provides a mature set of features, among them the following:

Packet routing between nodes

A central task of Cilium is ensuring network connectivity between containers running on different nodes in the cluster. In VCPS base clusters, Cilium runs in encapsulation mode by default. In this mode Cilium uses the VXLAN protocol to encapsulate network packets to and from containers in UDP packets that are sent directly between the nodes running those containers. The eBPF programs on the nodes then ensure that the encapsulated packet gets unwrapped and delivered to the correct container.

Using encapsulation simplifies deployments in cloud environments but is not the only routing mode supported by Cilium. It is also possible to disable tunnelling completely. In this mode the operator has to ensure that the network is capable of forwarding IP traffic using IPs given to containers and the Linux kernel on the nodes must be aware of how to forward packets addressed to container IPs.
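
The routing mode is part of Cilium's configuration. A sketch in Helm-values form (key names as in recent Cilium releases, so treat them as illustrative; the CIDR is a placeholder):

  # Default in VCPS: encapsulate inter-node container traffic in VXLAN
  routingMode: tunnel
  tunnelProtocol: vxlan

  # Alternative: native routing without tunnelling
  # routingMode: native
  # ipv4NativeRoutingCIDR: 10.0.0.0/8    # placeholder, match your network
  # autoDirectNodeRoutes: true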

Using this mode it is possible for operators to uniquely identify sources and targets of packets between containers on an IP level. This allows for observability and policy enforcement using the same tools that are already used for monitoring and controlling the rest of the network.

Transparent network encryption

VCPS configures transparent encryption in Cilium using WireGuard. With this nodes create WireGuard tunnels between each other using encryption keys generated by Cilium and shared via the global key-value store. This means that all container traffic between nodes is encrypted and cannot be read by an attacker listening to traffic in the network. The encryption is transparent for the involved containers which means no special handling on the application layer is necessary.
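
A minimal sketch of how this is typically enabled in Cilium's Helm values (key names as in recent Cilium releases; verify against the version in use):

  encryption:
    enabled: true
    type: wireguard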

Network Policies

Cilium supports enforcement of Kubernetes NetworkPolicies. The native Kubernetes network policies implement traffic control on OSI layer 3 or 4 within the cluster and look like this:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 172.17.0.0/16
            except:
              - 172.17.1.0/24
        - namespaceSelector:
            matchLabels:
              project: myproject
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
  egress:
    - to:
        - ipBlock:
            cidr: 10.0.0.0/24
      ports:
        - protocol: TCP
          port: 5978

Network policies can specify rules for ingress and egress. Ingress rules concern packets whose destination is the given container, while egress rules are applied to packets sent by the given container. If no network policies are defined for a container, it is non-isolated and can send and receive packets without restrictions.

Ingress rules normally define from rules that describe which sources should be able to send packets to the given container. They can select packets by source IP (ipBlock), by the namespace of the source container (namespaceSelector), or by the pod of the source container (podSelector). Using the ports key, packets can be restricted to certain UDP or TCP ports.

Egress rules can define network targets for packets sent by the container in the to key, using the same selectors as the ingress from rules. Allowed target ports for packets originating in the container can again be restricted via the ports key.

In addition to the default Kubernetes network policies Cilium supports CiliumNetworkPolicies that support additional filter semantics on OSI layers 3-7. With this it is possible for example to restrict HTTP connections to specific paths or methods. Depending on the use case or requirements it is possible to define complex rules for how containers within the cluster are able to communicate with each other or the outside world.
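
As an illustrative sketch, a CiliumNetworkPolicy that only allows HTTP GET requests to paths under /public from frontend pods could look like this (all label values are placeholders):

  apiVersion: cilium.io/v2
  kind: CiliumNetworkPolicy
  metadata:
    name: allow-get-public
  spec:
    endpointSelector:
      matchLabels:
        app: backend        # placeholder label
    ingress:
      - fromEndpoints:
          - matchLabels:
              app: frontend  # placeholder label
        toPorts:
          - ports:
              - port: "80"
                protocol: TCP
            rules:
              http:
                - method: "GET"
                  path: "/public/.*"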

Cert Manager

When operating complex or public-facing applications it is often necessary to also operate a public key infrastructure (PKI) that provides certificates and encryption keys to applications, securing the communication between them or between them and the application users. To ease the operational burden of managing such a PKI, VCPS uses cert-manager. cert-manager is a Kubernetes-native application that simplifies the creation and management of certificates. The following image illustrates its operation (source):

[Image: Overview of cert-manager operation]

cert-manager can be configured to use one or more of the supported issuers. Using a configured issuer, it creates certificates and encryption keys according to the specified Certificate custom resources. It also ensures that those certificates are renewed automatically a certain time before they expire. The actual certificate and key data is saved to Kubernetes Secrets and can then be used by containers within the cluster.

Among others, the following issuers are available:

  • CA takes the certificate and private key of a CA and uses those to create and sign new certificates. The CA certificate and private key have to be added to the cluster by the operator.
  • ACME uses the ACME protocol to request certificates for a given domain. This can be used to request public-facing certificates from Let's Encrypt.
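
To sketch the flow, an ACME issuer and a certificate request could be declared roughly like this (email address, domain, and secret names are placeholders):

  apiVersion: cert-manager.io/v1
  kind: Issuer
  metadata:
    name: letsencrypt-prod
    namespace: default
  spec:
    acme:
      server: https://acme-v02.api.letsencrypt.org/directory
      email: admin@example.com             # placeholder
      privateKeySecretRef:
        name: letsencrypt-account-key      # placeholder
      solvers:
        - http01:
            ingress:
              class: nginx
  ---
  apiVersion: cert-manager.io/v1
  kind: Certificate
  metadata:
    name: www-example-com
    namespace: default
  spec:
    secretName: www-example-com-tls        # resulting Secret with cert and key
    dnsNames:
      - www.example.com                    # placeholder
    issuerRef:
      name: letsencrypt-prod
      kind: Issuer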


Secret Encryption

During cluster operation, sensitive data like certificates and encryption keys needs to be managed and made available to containers running in the cluster. Within Kubernetes, such sensitive data is stored in Secrets. To improve security, secret data is never written to disk unencrypted; instead, Kubernetes is configured to use encryption at rest.

There are several different options for the algorithm used to encrypt the data, among them:

  • XSalsa20 and Poly1305
  • AES-GCM with random nonce
  • AES-CBC with PKCS#7 padding

The keys used for encryption can be generated during cluster deployment, provided by the user, or requested from a supported KMS provider.
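
Kubernetes encryption at rest is configured through an EncryptionConfiguration file passed to the API server. A minimal sketch using the secretbox provider (the key value is a placeholder and must be a base64-encoded 32-byte secret):

  apiVersion: apiserver.config.k8s.io/v1
  kind: EncryptionConfiguration
  resources:
    - resources:
        - secrets
      providers:
        - secretbox:                       # XSalsa20 and Poly1305
            keys:
              - name: key1
                secret: <base64-encoded 32-byte key>   # placeholder
        - identity: {}                     # fallback for reading unencrypted data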