
Policy based routing support for multiple Kubernetes clusters

When you use a single Citrix ADC to load balance multiple Kubernetes clusters, the Citrix ingress controller adds pod CIDR networks through static routes. These routes establish network connectivity between the Kubernetes pods and the Citrix ADC. However, when the pod CIDRs of the clusters overlap, route conflicts may occur. Citrix ADC supports policy-based routing (PBR) to address such networking conflicts. In PBR, routing decisions are taken based on criteria that you specify, typically a next hop to which the selected packets are sent. In a multi-cluster Kubernetes environment, PBR is implemented by reserving a subnet IP address (SNIP) for each Kubernetes cluster or Citrix ingress controller. Using a net profile, the SNIP is bound to all service groups created by the same Citrix ingress controller, so all traffic generated from service groups belonging to the same cluster has the same SNIP as its source IP address.
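For reference, the following is a minimal sketch of the kind of Citrix ADC configuration that PBR support produces for one cluster. The Citrix ingress controller generates this configuration for you; the profile, PBR, and service group names, the SNIP, and the next-hop address shown here are hypothetical placeholders.

    # Reserve a SNIP for the cluster and attach it to a net profile
    add ns ip 192.0.2.2 255.255.255.0 -type SNIP
    add netProfile k8s-cluster1-profile -srcIP 192.0.2.2

    # Route traffic sourced from this SNIP to the next hop of cluster 1
    add ns pbr k8s-cluster1-pbr ALLOW -srcIP = 192.0.2.2 -nextHop 203.0.113.1
    apply ns pbrs

    # Bind the net profile to a service group created for cluster 1
    set serviceGroup k8s-cluster1-svcgroup -netProfile k8s-cluster1-profile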

The following is a sample topology where PBR is configured for two Kubernetes clusters that are load balanced using a Citrix ADC VPX or MPX appliance.

Figure: PBR configuration

Configure PBR using the Citrix ingress controller

To configure PBR, you need one or more SNIPs per Kubernetes cluster. You can provide the SNIP values either through an environment variable in the Citrix ingress controller deployment YAML file during bootup or through a ConfigMap.

Perform the following steps to deploy the Citrix ingress controller and configure PBR using ConfigMap.

  1. Download the citrix-k8s-ingress-controller.yaml using the following command:

    wget https://raw.githubusercontent.com/citrix/citrix-k8s-ingress-controller/master/deployment/baremetal/citrix-k8s-ingress-controller.yaml
    
  2. Edit the Citrix ingress controller YAML file:

      - Specify the values of the environment variables according to your requirements, as shown in the sketch below. For more information about specifying the environment variables, see the [Deploy Citrix ingress controller](https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/deploy/deploy-cic-yaml/) documentation.
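
    The following is a minimal sketch of the environment section of the deployment. NS_IP, NS_USER, and NS_PASSWORD are documented Citrix ingress controller variables; the values here are placeholders, and the official YAML file reads the credentials from a Kubernetes secret rather than literal values.

    ```yml
    env:
      # Management IP address of the Citrix ADC (placeholder value)
      - name: "NS_IP"
        value: "192.0.2.100"
      # Citrix ADC credentials (placeholders; prefer a Kubernetes secret)
      - name: "NS_USER"
        value: "nsroot"
      - name: "NS_PASSWORD"
        value: "nsroot"
    ```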
    
  3. Deploy the Citrix ingress controller with the edited YAML file by running the following command on each cluster:

    kubectl create -f citrix-k8s-ingress-controller.yaml
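
    To verify that the controller is running, list the pods in the namespace where you deployed it. The pod name prefix comes from the deployment name in the YAML file, assumed here to be cic-k8s-ingress-controller:

        kubectl get pods | grep cic-k8s-ingress-controller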
    
  4. Create a YAML file, cic-configmap.yaml, with the required SNIP values in the ConfigMap.

    The following is an example of a ConfigMap with SNIP values:

    ```yml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: pbr-test
      namespace: default
    data:
      NS_SNIPS: '["192.0.2.2", "192.0.2.1"]'
    ```
    
  5. Apply the ConfigMap:

    kubectl create -f cic-configmap.yaml
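
After the ConfigMap is applied, you can confirm that the SNIPs were picked up. The kubectl command below inspects the ConfigMap; the show ns pbr command, run on the Citrix ADC CLI, lists the PBRs that the Citrix ingress controller created. The exact entries depend on your SNIPs and next hops.

    # On the Kubernetes cluster: confirm the ConfigMap contents
    kubectl get configmap pbr-test -o yaml

    # On the Citrix ADC CLI: list the configured PBRs
    show ns pbr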

You can also specify the SNIPs using the NS_SNIPS environment variable in the Citrix ingress controller deployment YAML file:

    - name: "NS_SNIPS"
      value: '["192.0.2.2", "192.0.2.1"]'

The following guidelines apply when you use a ConfigMap to configure SNIPs:

  • Only SNIPs can be added or removed through the ConfigMap. The feature-node-watch argument can be enabled only during bootup.

  • When you add a ConfigMap:

    • If SNIPs were already provided using the environment variable during bootup and you want to retain them, specify those SNIPs in the ConfigMap along with the new SNIPs. An example follows this list.

  • When you delete the ConfigMap:

    • All PBRs generated from the ConfigMap SNIPs are deleted. If SNIPs are provided through the environment variable, PBRs for those IP addresses are added.

    • If SNIPs were not provided using the NS_SNIPS environment variable, static routes are added because feature-node-watch is enabled.
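
For example, assume the controller was booted with NS_SNIPS set to '["192.0.2.2", "192.0.2.1"]' and you want to add 192.0.2.3 while retaining the bootup SNIPs. The updated ConfigMap repeats the bootup SNIPs alongside the new one (the name and addresses are illustrative):

```yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pbr-test
  namespace: default
data:
  # Bootup SNIPs are repeated so that they are retained, followed by the new SNIP
  NS_SNIPS: '["192.0.2.2", "192.0.2.1", "192.0.2.3"]'
```

Apply the change with `kubectl apply -f cic-configmap.yaml`. Deleting the ConfigMap with `kubectl delete -f cic-configmap.yaml` removes all PBRs generated from its SNIPs, as described above.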
