Istio Ingress for Kubernetes

Istio Ingress for Kubernetes provides abstractions on top of Kubernetes that facilitate the development of distributed microservice networks. The feature does not include Istio’s service mesh; instead, it assists with monitoring, routing, and managing requests that enter the microservice network from outside.

With this feature enabled, you can expose apps and containers running inside your Kubernetes cluster to the outside world. Incoming requests can be routed by hostname, URL, headers, and other characteristics to determine which service receives each request. Istio Ingress can be toggled on and off in the MKE web UI or through the MKE CLI.

This section walks you through the process of setting up Istio Ingress on Kubernetes clusters.

The Istio Ingress feature can be used by administrators and by developers.

Administrator usage

Traffic in Istio is categorized as data plane traffic and control plane traffic. Data plane traffic refers to the messages that the business logic of the workloads sends and receives. Control plane traffic refers to the configuration and control messages sent between Istio components to program the behavior of the mesh. Traffic management in Istio refers exclusively to data plane traffic.

  1. Open the MKE web UI.

  2. Select Admin Settings.

  3. Select Ingress.

  4. Under the Kubernetes tab:

    • Click the slider to enable Ingress for Kubernetes.

    • Configure the proxy to specify how the cluster handles external traffic:

      • Specify the node ports for the Istio Ingress Gateway service. External traffic can enter the cluster through these ports, so ensure that they are open.

      • (Optional) Select External IP to create a Layer 7 load balancer in front of multiple nodes, and then add a list of external IP addresses to the Ingress Gateway service.

    • Configure the replicas to specify how load balancing scales.

    • Configure placement rules and load balancer configurations.

    • Click Save.

  5. Create an Istio Gateway to expose your apps:

    • Select Kubernetes.

    • Select Ingress.

    • Under the Gateways tab, click Create.

    • Select the nodes on which this Gateway configuration will be applied.

    • Add the server details. These details describe the properties of the proxy on a given load balancer port.

    • Click Generate YML to create the configuration file (a sample Gateway manifest follows this procedure).

  6. Click Skip to YAML Editor.
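
The generated Gateway manifest will resemble the following sketch. The name application-gateway matches the gateway referenced by the Virtual Service samples later in this section; the selector, port, and hosts values shown here are illustrative assumptions that depend on your environment:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: application-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway    # default label on the Istio Ingress Gateway pods (assumed)
  servers:
  - port:
      number: 80             # illustrative load balancer port
      name: http
      protocol: HTTP
    hosts:
    - '*'                    # accept any hostname; restrict as needed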

Developer usage

This section assumes that your developers have permission to deploy workloads to the cluster.

Use cases

Create an Istio Virtual Service

As an administrator, you need to create an Istio Virtual Service to route all requests matching <domain>/{status,delay} to an application. Virtual Services are namespace-scoped and can only route to applications within the namespace in which they are created.

To create a virtual service:

  1. Select Kubernetes.

  2. Select Ingress.

  3. Under the Virtual Services tab, click Create.

    • Enter a name for the virtual service.

    • Add the destination hosts to which traffic is being sent.

    • Add the gateways.

    • Click Generate YML to create the configuration file.

    • Click Skip to YAML Editor.

Sample YAML configuration file

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  generation: 1
  name: httpbin-vs
  selfLink: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/httpbin-vs
spec:
  gateways:
  - application-gateway
  hosts:
  - httpbin
  http:
  - match:
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    route:
    - destination:
        host: httpbin
        port:
          number: 8000
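
In addition to URI prefixes, Virtual Service match rules can also key on request headers. The following sketch assumes a hypothetical end-user header with the value tester and routes only matching requests to the same destination:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin-header-vs    # hypothetical name
spec:
  gateways:
  - application-gateway
  hosts:
  - httpbin
  http:
  - match:
    - headers:
        end-user:            # assumed header name
          exact: tester      # assumed header value
    route:
    - destination:
        host: httpbin
        port:
          number: 8000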

Enable split testing

To enable split testing of an application, the administrator adds an Istio Destination Rule and updates the Virtual Service to route a small percentage of traffic to the new version of the application.

  1. Add the Istio Destination Rule in the namespace where the application is deployed. This creates the service subsets needed for the Virtual Service's matcher, based on Kubernetes pod labels.

    apiVersion: networking.istio.io/v1alpha3
    kind: DestinationRule
    metadata:
      name: httpbin-destination-rule
    spec:
      host: httpbin
      subsets:
      - name: 'v1.0.0'
        labels:
          version: 'v1.0.0'
      - name: 'v1.1.0-featuretest'
        labels:
          version: 'v1.1.0-featuretest'
    
  2. Edit the existing Virtual Service and add the required routing policy, splitting traffic across the Destination Rule subsets by weight.

    apiVersion: networking.istio.io/v1alpha3
    kind: VirtualService
    metadata:
      name: httpbin-vs
    spec:
      gateways:
      - application-gateway
      hosts:
      - '*'
      http:
      - route:
        - destination:
            host: httpbin
            subset: 'v1.0.0'
          weight: 95
        - destination:
            host: httpbin
            subset: 'v1.1.0-featuretest'
          weight: 5
    

Configure sticky sessions

As a developer, you may need customers participating in split testing to consistently see a particular feature. For this, the administrator must add a sticky session to the initial application configuration, which will force Istio Ingress to route all follow-up requests from the same caller to the same pod.

The administrator creates a new Destination Rule, declaring a hashing-based load balancer for the application using the user cookie as the hash key:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin-destination-rule
spec:
  host: httpbin.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: User
          ttl: 0s

Canary deployments

Canary deployments are a pattern for rolling out releases to a subset of users or servers. Canary deployments are useful if a developer wants to gradually deploy a new version of an application without any downtime.

To create a canary deployment, the administrator creates a Destination Rule with subsets for both versions of the application.

  1. Add the Istio Destination Rule in the namespace where the application is deployed. This creates the service subsets needed for the Virtual Service's matcher, based on Kubernetes pod labels.

  2. Edit the existing Virtual Service and add the required routing policy, referencing the Destination Rule subsets.

  3. In the Virtual Service configuration, specify the weight of each version.

  4. Gradually increase the weight of the new version until it reaches 100%.

  5. Additionally, use the Kubernetes Autoscaler to make sure that there are always available instances (see the sketch after this step).
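
    The following is a minimal sketch of the autoscaling step, assuming the new version runs as a Deployment named httpbin-featuretest (a hypothetical name) and that cluster metrics are available for CPU-based scaling:

    apiVersion: autoscaling/v1
    kind: HorizontalPodAutoscaler
    metadata:
      name: httpbin-featuretest-hpa        # hypothetical name
      namespace: default
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: httpbin-featuretest          # assumed Deployment running the new version
      minReplicas: 1
      maxReplicas: 5
      targetCPUUtilizationPercentage: 80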

Blacklisting

To mitigate DoS attacks, the administrator can blacklist offending IP addresses so that they cannot degrade the application's uptime. The blacklist is created using Istio's Mixer policies.

  1. Create a handler, instance, and rule, which together filter out any requests from the IP addresses specified in the handler's overrides list (see the sample Mixer policy configuration file below):

    kubectl apply -f
    
  2. The Mixer instance reads the client IP from the x-forwarded-for header that Istio automatically attaches to incoming requests.

  3. Update the MKE configuration file, setting cluster_config.service_mesh.ingress_preserve_client_ip to true so that the original client IP is preserved in that header, as sketched below.
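
    The MKE configuration file is TOML. The following is a minimal sketch of the relevant fragment, assuming the dotted key path above maps to a [cluster_config.service_mesh] table (verify the table name against your MKE version):

    # MKE configuration file fragment (TOML); the table name is assumed from
    # the key path cluster_config.service_mesh.ingress_preserve_client_ip
    [cluster_config.service_mesh]
      ingress_preserve_client_ip = true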

Sample Mixer policy configuration file

apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
name: blacklisthandler
namespace: istio-system
spec:
compiledAdapter: listchecker
params:
   overrides:
   - 37.72.166.13
   - <IP/CIDR TO BE BLACKLISTED>
   blacklist: true
   entryType: IP_ADDRESSES
   refresh_interval: 1s
   ttl: 1s
   caching_interval: 1s
---
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
name: blacklistinstance
namespace: istio-system
spec:
compiledTemplate: listentry
params:
   value: ip(request.headers["x-forwarded-for"]) || ip("0.0.0.0")
---
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
name: blaclistcidrblock
namespace: istio-system
spec:
match: (source.labels["istio"] | "") == "ingressgateway"
actions:
- handler: blacklisthandler
   instances:
   - blacklistinstance