Traffic shifting

Version 2 of the customers service has been developed, and it's time to deploy it to production. Whereas version 1 returned a list of customer names, version 2 also includes each customer's city.

Deploying customers, v2

We wish to deploy the new service but aren't yet ready to direct traffic to it.

It would be prudent to separate the task of deploying the new service from the task of directing traffic to it.

Labels

The customers service is labeled with app=customers.

Verify this with:

kubectl get pod -Lapp,version

Note the selector on the customers service in the output of the following command:

kubectl get svc customers -o wide

If we deployed v2 now, the service's selector would match the pods of both versions, and requests would be load-balanced across them indiscriminately.

DestinationRules

We can inform Istio that two distinct subsets of the customers service exist, and we can use the version label as the discriminator.

customers-destinationrule.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: customers
spec:
  host: customers.default.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

  1. Apply the above destination rule to the cluster.

  2. Verify that it's been applied.

    kubectl get destinationrule
    
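Step 1 above doesn't spell out the command. Assuming you saved the manifest under the filename shown above (customers-destinationrule.yaml), applying it looks like:

```shell
# Apply the DestinationRule that defines the v1 and v2 subsets
kubectl apply -f customers-destinationrule.yaml
```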

It's also worthwhile to invoke the istioctl x describe command on the customers service:

istioctl x describe svc customers

Notice how the output references the newly-created subsets v1 and v2.

VirtualServices

Armed with two distinct destinations, the VirtualService Custom Resource allows us to define a routing rule that sends all traffic to the v1 subset.

customers-virtualservice.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1

Above, note how the route specifies subset v1.

  1. Apply the virtual service to the cluster.

  2. Verify that it's been applied.

    kubectl get virtualservice 
    
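As with the destination rule, assuming the manifest above was saved as customers-virtualservice.yaml, step 1 amounts to:

```shell
# Route all traffic for the customers service to subset v1
kubectl apply -f customers-virtualservice.yaml
```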

We can now safely proceed to deploy v2, without having to worry about the new workload receiving traffic.

Finally deploy customers, v2

Apply the following Kubernetes deployment to the cluster.

customers-v2.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v2
  labels:
    app: customers
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v2
  template:
    metadata:
      labels:
        app: customers
        version: v2
    spec:
      serviceAccountName: customers
      containers:
      - image: gcr.io/tetratelabs/customers:2.0.0
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 3000

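Assuming the deployment manifest is saved as customers-v2.yaml (the filename shown above), apply and verify it with:

```shell
# Deploy customers v2; it receives no traffic yet, since the
# VirtualService routes everything to subset v1
kubectl apply -f customers-v2.yaml

# Confirm the new pod is running with labels app=customers,version=v2
kubectl get pod -Lapp,version
```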
Check that traffic routes strictly to v1

  1. Generate some traffic.

    while true; do curl -I http://$GATEWAY_IP/; sleep 0.5; done
    
  2. Open a separate terminal and launch the Kiali dashboard.

    istioctl dashboard kiali
    

Select the default namespace and take a look at the graph.

The graph should show all traffic going to v1.

Route to customers, v2

We wish to proceed with caution. Before exposing version 2 to customers, we want to make sure that the service functions properly.

Expose "debug" traffic to v2

Review this proposed updated routing specification.

customers-vs-debug.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - match:
    - headers:
        user-agent:
          exact: debug
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1

We are telling Istio to check an HTTP header: if the user-agent is set to debug, route to v2, otherwise route to v1.

Open a new terminal and apply the above resource to the cluster; it will overwrite the currently defined VirtualService as both yaml files use the same resource name.

kubectl apply -f customers-vs-debug.yaml

Test it

Open a browser and visit the application.

If you need to recapture the ingress gateway's IP address:

    GATEWAY_IP=$(kubectl get svc -n istio-system istio-ingressgateway -ojsonpath='{.status.loadBalancer.ingress[0].ip}')

We can tell v1 and v2 apart: v2 displays not only customer names but also each customer's city (in two columns).

The user-agent header can be included in a request in a number of ways:

If you're using Chrome or Firefox, you can customize the user-agent header as follows:

  1. Open the browser's developer tools
  2. Open the "three dots" menu, and select More tools → Network conditions
  3. The network conditions panel will open
  4. Under User agent, uncheck Use browser default
  5. Select Custom... and in the text field enter debug

Refresh the page; traffic should be directed to v2.

Alternatively, pass the header explicitly with curl:

    curl -H "user-agent: debug" http://$GATEWAY_IP

Check out modheader, a convenient browser extension for modifying HTTP headers in-browser.

Tip

If you refresh the page a good dozen times and then wait ~15-30 seconds, you should see some of that v2 traffic appear in Kiali.

Canary

Well, v2 looks good; we decide to expose the new version to the public, but we remain cautious.

Start by siphoning 10% of traffic over to v2.

customers-vs-canary.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
      weight: 10
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
      weight: 90

Above, note the weight fields: 10 percent of traffic is routed to subset v2 and the remaining 90 percent to v1. Once applied, Kiali should show traffic going to both v1 and v2.

  • Apply the above resource.
  • In your browser: undo the injection of the user-agent header, and refresh the page a bunch of times.
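Assuming the canary manifest was saved as customers-vs-canary.yaml (the filename shown above), the first bullet amounts to:

```shell
# Shift 10% of traffic to the v2 subset
kubectl apply -f customers-vs-canary.yaml
```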

In Kiali, under the Display pulldown menu, you can turn on "Traffic Distribution" to view the relative percentage of traffic sent to each subset.

Most of the requests still go to v1, but some (10%) are directed to v2.

Check Grafana

Before we open the floodgates, we wish to determine how v2 is faring.

istioctl dashboard grafana

In Grafana, visit the Istio Workload Dashboard and look specifically at the customers v2 workload. Check the request rate, the incoming success rate, and the latencies.

If all looks good, adjust the weights from 90/10 to, say, 50/50.
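For example, only the weight values in the canary VirtualService's http route change (a sketch of the updated route section):

```yaml
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
      weight: 50
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
      weight: 50
```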

Watch the request volume change (you may need to click on the "refresh dashboard" button in the upper right-hand corner).

Finally, switch all traffic over to v2.

customers-virtualservice-final.yaml
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2

After applying the above resource, go to your browser and make sure all requests land on v2 (two-column output). Within a minute or so, the Kiali dashboard should also reflect the fact that all traffic is going to the customers v2 service.

Though it no longer receives any traffic, we decide to leave v1 running a while longer before retiring it.

Going further

Investigate Flagger, an Istio-compatible tool that can be used to automate the process of progressive delivery (aka Canary rollouts). Here is an exploration of Flagger with Istio and its bookinfo sample application.

Cleanup

After completing this lab, reset your application to its initial state:

  1. Delete the customers virtual service:

    kubectl delete virtualservice customers
    
  2. Delete the destination rule for the customers service:

    kubectl delete destinationrule customers
    
  3. Delete the customers-v2 deployment:

    kubectl delete deploy customers-v2