Security

In this lab we explore some of the security features of the Istio service mesh.

Mutual TLS

By default, Istio is configured such that when a service is deployed onto the mesh, it will take advantage of mutual TLS:

  • Workloads are given an identity as a function of their associated service account and namespace.
  • An X.509 certificate is issued to the workload (and regularly rotated), and is used to identify the workload in calls to other services.
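
For example, under Istio's default trust domain cluster.local, a workload running in the default namespace under a service account named customers would carry the SPIFFE identity:

spiffe://cluster.local/ns/default/sa/customers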

In the observability lab, we looked at the Kiali dashboard and noted the icons indicating that traffic was secured with mTLS.

Can a workload receive plain-text requests?

We can test whether a mesh workload, such as the customers service, will allow a plain-text request as follows:

  1. Create a separate namespace that is not configured with automatic injection.

    kubectl create ns other-ns
    
  2. Deploy sleep to that namespace (a minimal sketch of sleep.yaml follows these steps):

    kubectl apply -f sleep.yaml -n other-ns
    
  3. Verify that the sleep pod is running without a sidecar (the READY column should show 1/1):

    kubectl get pod -n other-ns
    
  4. Call the customer service from that pod:

    kubectl exec -n other-ns deploy/sleep -- curl -s customers.default | jq
    
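If you don't have sleep.yaml at hand, a minimal stand-in, modeled on the Istio samples' sleep deployment, looks like the sketch below (the lab's own copy may differ; the Istio sample, for instance, also defines a ServiceAccount and a Service):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sleep
  template:
    metadata:
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: curlimages/curl
        command: ["/bin/sleep", "infinity"]
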

The output of the curl call is a JSON-formatted list of customers.

We conclude that Istio is configured by default to allow plain-text requests. This is called permissive mode, and it is specifically designed to allow services that have not yet been onboarded onto the mesh to participate.
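
Nothing in the cluster states this explicitly; it is simply Istio's default. If you wanted to spell it out, a PeerAuthentication resource restating the default for the default namespace would look like this (it mirrors the strict policy shown in the next section, with the mode flipped):

---
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: PERMISSIVE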

Enable strict mode

Istio provides the PeerAuthentication custom resource to specify peer authentication policy.

  1. Review the following policy.

    mtls-strict.yaml

    ---
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: default
    spec:
      mtls:
        mode: STRICT
    

    Info

    Strict mTLS can be enabled mesh-wide by setting the policy's namespace to the Istio root namespace, which by default is istio-system.
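
    For example, the following variation on the policy above (only the namespace differs) would enforce strict mTLS across the entire mesh:

        ---
        apiVersion: security.istio.io/v1beta1
        kind: PeerAuthentication
        metadata:
          name: default
          namespace: istio-system
        spec:
          mtls:
            mode: STRICT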

  2. Apply the PeerAuthentication resource to the cluster.

    kubectl apply -f mtls-strict.yaml
    
  3. Verify that the peer authentication has been applied.

    kubectl get peerauthentication
    

Verify that plain-text requests are no longer permitted

kubectl exec -n other-ns deploy/sleep -- curl customers.default

The console output should indicate that the connection was reset by peer.
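
Specifically, expect curl to print an error resembling:

curl: (56) Recv failure: Connection reset by peer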

Inspecting a workload certificate

  1. Capture the certificate returned by the customers workload:

    kubectl exec deploy/sleep -c istio-proxy -- \
      openssl s_client -showcerts -connect customers:80 > cert.txt
    
  2. Inspect the certificate with:

    openssl x509 -in cert.txt -text -noout
    
  3. Review the certificate fields:

    1. The certificate validity period should be 24 hours.
    2. The Subject Alternative Name field should contain the SPIFFE URI.
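
    In the openssl output, the SAN portion should look something like this (the service account segment depends on how the customers workload is deployed):

        X509v3 Subject Alternative Name:
            URI:spiffe://cluster.local/ns/default/sa/customers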

How do I know that traffic is mTLS-encrypted?

Here is a recipe that uses the tcpdump utility to inspect traffic to a service and verify that it is indeed encrypted.

  1. Update the Istio installation with the configuration field values.global.proxy.privileged set to true:

    istioctl install --set values.global.proxy.privileged=true
    

    For a description of this configuration field, see the output of helm show values istio/istiod | grep privileged.

  2. Restart the customers deployment:

    kubectl rollout restart deploy customers-v1
    
  3. Grab the IP address of the customers Pod:

    IP_ADDRESS=$(kubectl get pod -l app=customers -o jsonpath='{.items[0].status.podIP}')
    
  4. Shell into the customers sidecar container:

    kubectl exec -it svc/customers -c istio-proxy -- env IP_ADDRESS=$IP_ADDRESS /bin/bash
    
  5. Start tcpdump on the port that the customers service is listening on:

    sudo tcpdump -vvvv -A -i eth0 "((dst port 3000) and (net ${IP_ADDRESS}))"
    
  6. In a separate terminal, make a call to the customers service:

    kubectl exec deploy/sleep -- curl customers
    

You will see encrypted text in the tcpdump output.

Security in depth

Another important layer of security is to define an authorization policy, in which we allow only specific services to communicate with other services.

At the moment, any container can, for example, call the customers service or the web-frontend service.

  1. Call the customers service.

    kubectl exec deploy/sleep -- curl -s customers | jq
    
  2. Call the web-frontend service.

    kubectl exec deploy/sleep -- curl -s web-frontend | head
    

Both calls succeed.

We wish to apply a policy in which only web-frontend is allowed to call customers, and only the ingress gateway can call web-frontend.

Study the authorization policy below.

authz-policy-customers.yaml

---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allowed-customers-clients
  namespace: default
spec:
  selector:
    matchLabels:
      app: customers
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/web-frontend"]

  • The selector section specifies that the policy applies to the customers service.
  • Note how the rule has a from: source: section indicating who is allowed in.
  • The nomenclature for the value of the principals field comes from the SPIFFE standard. Note how it captures the service account name and namespace associated with the web-frontend service. This identity is encoded in the X.509 certificate that each service presents when making secure mTLS calls to other services.
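
If you want to confirm that principal yourself, one way, assuming the web-frontend pods carry the label app=web-frontend, is to read the service account directly off the pod spec:

kubectl get pod -l app=web-frontend -o jsonpath='{.items[0].spec.serviceAccountName}'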

Tasks:

  • Apply the policy to your cluster.
  • Verify that you are no longer able to reach the customers pod from the sleep pod.

Challenge

Can you come up with a similar authorization policy for web-frontend?

  • Use a copy of the customers authorization policy as a starting point
  • Give the resource an apt name
  • Revise the selector to match the web-frontend service
  • Revise the rule to match the principal of the ingress gateway

Hint

The ingress gateway has its own identity.

Here is a command that can help you find the name of the service account associated with its identity:

kubectl get pod -n istio-system -l app=istio-ingressgateway -o yaml | grep serviceAccountName

Use this service account name together with the namespace that the ingress gateway is running in to specify the value for the principals field.
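
If you get stuck, here is one possible solution, assuming the command above reports the default service account name istio-ingressgateway-service-account:

---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allowed-web-frontend-clients
  namespace: default
spec:
  selector:
    matchLabels:
      app: web-frontend
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]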

Test it

Don't forget to verify that the policy is enforced.

  • Call both services again from the sleep pod and ensure communication is no longer allowed.
  • The console output should contain the message RBAC: access denied.
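
Concretely, repeat the calls from earlier in this section:

kubectl exec deploy/sleep -- curl -s customers

kubectl exec deploy/sleep -- curl -s web-frontend

Each should now return RBAC: access denied instead of the application response.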

Next

In the next lab we show how to use Istio's traffic management features to upgrade the customers service with zero downtime.