Service discovery and load balancing¶
This lab is a standalone exploration of service discovery and load balancing in Istio.
Clusters and endpoints¶
The `istioctl` CLI's diagnostic command `proxy-status` provides a simple way to list all proxies that Istio knows about.

Run and study the output of the `proxy-status` command:
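A sketch of the command, assuming `istioctl` is on your PATH and your kubeconfig points at the lab cluster:

```shell
istioctl proxy-status
```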
Since we have not yet deployed any workloads, the output should be rather anemic, citing only the lone ingress gateway that was deployed when we installed Istio in the previous lab.
Enable automatic sidecar injection¶
There are two options for sidecar injection: automatic and manual.
In this lab we will use automatic injection, which involves labeling the namespace where the pods are to reside.
- Label the `default` namespace.
- Verify that the label has been applied:
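A sketch of these two steps, using the standard automatic-injection label:

```shell
# Opt the default namespace in to automatic sidecar injection
kubectl label namespace default istio-injection=enabled

# Confirm the label is present
kubectl get namespace default --show-labels
```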
Deploy the `helloworld` sample¶
The Istio distribution comes with a sample application, "helloworld". Deploy `helloworld` to the `default` namespace:
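Assuming your working directory is the root of the Istio distribution, where the sample manifests reside:

```shell
kubectl apply -f samples/helloworld/helloworld.yaml
```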
Check the output of `proxy-status` again:
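As before:

```shell
istioctl proxy-status
```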
Confirm that the two `helloworld` workloads are listed and marked as "SYNCED".
While here, let us also deploy the sample app called `sleep`, which will serve as a client from which we can call the `helloworld` app:
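The `sleep` sample ships with the Istio distribution as well:

```shell
kubectl apply -f samples/sleep/sleep.yaml
```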
The service registry¶
Istio maintains an internal service registry, which can be observed through a debug endpoint, `/debug/registryz`, exposed by `istiod`:
`curl` the registry endpoint:
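One way to reach the endpoint is to run `curl` from inside the `istiod` pod; port 15014 is the same port the `jq` variant below uses:

```shell
kubectl exec -n istio-system deploy/istiod -- \
  curl -s localhost:15014/debug/registryz
```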
The output can be prettified, and filtered (to highlight the list of host names in the registry), with a tool such as `jq`:
```shell
kubectl exec -n istio-system deploy/istiod -- \
  curl -s localhost:15014/debug/registryz | jq .[].hostname
```
Confirm that the `helloworld` service is listed in the output.
The sidecar configuration¶
Review the deployments in the `default` namespace:
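For example:

```shell
kubectl get deploy -n default
```

If you list pods with `kubectl get pods`, each pod should now show two containers: the application container plus the injected Envoy sidecar.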
The `istioctl` CLI's diagnostic command `proxy-config` will help us inspect the configuration of proxies. Envoy's term for a service is "cluster".
Confirm that `sleep` knows about other services (`helloworld`, mainly):
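A sketch of the command:

```shell
istioctl proxy-config clusters deploy/sleep
```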
List the endpoints backing each "cluster":
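For example:

```shell
istioctl proxy-config endpoints deploy/sleep
```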
Zero in on the endpoints for the `helloworld` service:
```shell
istioctl proxy-config endpoints deploy/sleep \
  --cluster "outbound|5000||helloworld.default.svc.cluster.local"
```
We learn that Istio has communicated to the `sleep` workload information about both `helloworld` endpoints.
Load balancing¶
The `sleep` pod's container image has `curl` pre-installed.
Make repeated calls to the `helloworld` service from the `sleep` pod:
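A simple loop sketch; the `helloworld` sample serves `/hello` on port 5000 (the same port seen in the endpoints output above):

```shell
for i in $(seq 1 10); do
  kubectl exec deploy/sleep -- curl -s helloworld:5000/hello
done
```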
Some responses will be from `helloworld-v1` while others will be from `helloworld-v2`, an indication that Envoy is load-balancing requests between these two endpoints.
Envoy does not use the ClusterIP service. It performs client-side load-balancing using the endpoints you resolved above.
We can examine the `helloworld` "cluster" definition in a sample client to see what load balancing policy is in effect:
```shell
istioctl proxy-config cluster deploy/sleep \
  --fqdn helloworld.default.svc.cluster.local -o yaml | grep lbPolicy
```
To influence the load balancing algorithm that Envoy uses when calling `helloworld`, we can define a traffic policy, like so:
`helloworld-lb.yaml`:
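A minimal sketch of such a destination rule; the `RANDOM` policy is assumed from the confirmation step below, which expects the `lbPolicy` to read "RANDOM":

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
```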
Apply the above traffic policy to the cluster:
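Using the file name shown above:

```shell
kubectl apply -f helloworld-lb.yaml
```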
Examine the updated load-balancer policy:
```shell
istioctl proxy-config cluster deploy/sleep \
  --fqdn helloworld.default.svc.cluster.local -o yaml | grep lbPolicy
```
Confirm that it now reads "RANDOM".
For more insight into the merits of the different load balancing options, read the blog entry "Examining Load Balancing Algorithms with Envoy" on the Envoy Proxy blog.
Traffic distribution¶
We can go a step further and control how much traffic to send to version v1 and how much to v2.
First, define the two subsets, v1 and v2:
`helloworld-dr.yaml`:
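A sketch of the revised destination rule defining the two subsets; the subset labels are assumed to match the `version: v1` and `version: v2` pod labels used by the `helloworld` sample:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: helloworld
spec:
  host: helloworld.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: RANDOM
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```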
Apply the updated destination rule to the cluster:
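Using the file name shown above:

```shell
kubectl apply -f helloworld-dr.yaml
```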
If we now inspect the list of clusters, note that there's one for each subset:
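For example:

```shell
istioctl proxy-config clusters deploy/sleep \
  --fqdn helloworld.default.svc.cluster.local
```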
With the subsets defined, we turn our attention to the routing specification. We use a VirtualService, in this case to direct 25% of traffic to v1 and 75% to v2:
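A sketch of such a virtual service; the resource name is an assumption, while the 25/75 weights come from the text:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloworld
spec:
  hosts:
  - helloworld.default.svc.cluster.local
  http:
  - route:
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v1
      weight: 25
    - destination:
        host: helloworld.default.svc.cluster.local
        subset: v2
      weight: 75
```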
Apply the VirtualService to the cluster:
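Assuming the manifest was saved as `helloworld-vs.yaml` (a hypothetical file name):

```shell
kubectl apply -f helloworld-vs.yaml
```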
Finally, we can inspect the routing rules applied to an Envoy client with our `proxy-config` diagnostic command:
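A sketch of the command:

```shell
istioctl proxy-config routes deploy/sleep -o yaml
```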
Note the `weightedClusters` section in the routes output.
The `istioctl` CLI provides a convenient command to inspect the configuration of a service:
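Using the experimental `describe` subcommand referenced below:

```shell
istioctl x describe service helloworld
```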
I think you'll agree the output of the `istioctl x describe` command is a little easier to parse by comparison.