Istio WasmPlugin¶
We'll create a WasmPlugin resource in this lab and deploy it to the Kubernetes cluster.
The WasmPlugin resource allows us to select the workloads we want to apply the Wasm module to and to point to the Wasm module itself.
What about EnvoyFilter?
In previous Istio versions, we'd have to use the EnvoyFilter resource to configure Wasm plugins. We could either point to a local Wasm file (i.e., a file accessible by the Istio proxy) or to a remote location. Using a remote location (e.g., http://some-storage-account/main.wasm), the Istio proxy would download the Wasm file and cache it in a volume accessible to the proxy.
The WasmPlugin resource includes a feature that enables Istio proxy (or istio-agent to be precise) to download the Wasm file from an OCI-compliant registry. That means we can treat the Wasm files just like we treat Docker images. We can push them to a registry, version them using tags, and reference them from the WasmPlugin resource.
There was no need to push or publish the main.wasm file anywhere in the previous labs, as it was accessible by the Envoy proxy because everything was running locally. However, now that we want to run the Wasm module in Envoy proxies that are part of the Istio service mesh, we need to make the main.wasm file available to all those proxies so they can load and run it.
Building the Wasm image¶
Since we'll be building and pushing the Wasm file, we'll need a very minimal Dockerfile in the project:
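A minimal Dockerfile along these lines should work (this is a sketch; it assumes main.wasm sits in the project root and renames it to plugin.wasm inside the image):

FROM scratch
COPY main.wasm ./plugin.wasm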
This Dockerfile copies the main.wasm file into the container as plugin.wasm. Save the above contents to Dockerfile.
Next, we can build and push the Docker image:
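For example, assuming we tag the image wasm:v1 in our own repository (replace REPOSITORY with your registry and account; the tag is arbitrary):

docker build -t REPOSITORY/wasm:v1 .
docker push REPOSITORY/wasm:v1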
Creating WasmPlugin resource¶
We can now create the WasmPlugin resource that tells Envoy where to download the extension from and which workloads to apply it to (we'll use the httpbin workload we'll deploy next).
WasmPlugin resource
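Here's a sketch of what the resource could look like (the resource name is made up for this lab, and the oci:// URL assumes the image and tag pushed in the previous step):

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: hello-header-counter
  namespace: default
spec:
  selector:
    matchLabels:
      app: httpbin
  url: oci://REPOSITORY/wasm:v1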
You should update the REPOSITORY value in the url field before saving the above YAML to plugin.yaml and deploying it using kubectl apply -f plugin.yaml.
We'll deploy a sample workload to try out the Wasm extension. We'll use httpbin. Make sure the default namespace is labeled for Istio sidecar injection (kubectl label ns default istio-injection=enabled), and then deploy httpbin:
httpbin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
Save the above YAML to httpbin.yaml and deploy it using kubectl apply -f httpbin.yaml.
Before continuing, check that the httpbin Pod is up and running:
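One way to check is to list the Pod by its label; with the sidecar injected, it should report 2/2 containers ready:

kubectl get pod -l app=httpbin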
To see if something went wrong with downloading the Wasm module, you can look at the proxy logs.
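For example (replace the placeholder with your actual Pod name; filtering for Wasm-related messages is optional):

kubectl logs [httpbin-pod-name] -c istio-proxy | grep -i wasm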
Let's try out the deployed Wasm module!
We will create a single Pod inside the cluster, and from there, we will send a request to http://httpbin:8000/get and include the hello header.
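One way to get such a Pod is with kubectl run (the curlimages/curl image is just an assumption here; any image that includes curl will work):

kubectl run curl --image=curlimages/curl -it --rm --command -- sh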
Defaulted container "curl" out of: curl, istio-proxy, istio-init (init)
If you don't see a command prompt, try pressing enter.
/ $
Once you get the prompt in the curl container, send a request to the httpbin service:
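For example (the header value is arbitrary; the Wasm module only looks for the presence of the hello header, and the /headers path matches the captured output below):

curl -v -H "hello: world" http://httpbin:8000/headers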
> GET /headers HTTP/1.1
> User-Agent: curl/7.35.0
> Host: httpbin:8000
> Accept: */*
>
< HTTP/1.1 200 OK
< server: envoy
< date: Mon, 22 Jun 2021 18:52:17 GMT
< content-type: application/json
< content-length: 525
< access-control-allow-origin: *
< access-control-allow-credentials: true
< x-envoy-upstream-service-time: 3
...
If we exit the pod and look at the stats, we'll notice that the hello_header_counter has increased:
kubectl exec -it [httpbin-pod-name] -c istio-proxy -- curl localhost:15000/stats/prometheus | grep hello
Cleanup¶
To delete all resources created during this lab, run the following:
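Assuming you saved the resources to plugin.yaml and httpbin.yaml as described above:

kubectl delete -f plugin.yaml
kubectl delete -f httpbin.yaml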