Lab environment

Options
If you brought your own Kubernetes cluster:

- Istio version 1.23.0 officially supports Kubernetes versions 1.27 - 1.30. Feel free to consult Istio's Support status of Istio releases page for more information.
- We recommend a 3-worker-node cluster of machine type "e2-standard-2" or similar, though a smaller cluster will likely work just fine.
If you have your own public cloud account:

- On GCP, a command along the lines of the example shown after this list should provision a GKE cluster of adequate size for the workshop.
- Alternatively, feel free to provision a K8S cluster on any infrastructure of your choosing.
- Be sure to configure your `kubeconfig` file to point to your cluster.
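As an illustration only, a GKE cluster matching the sizing recommendation above can be created with a command along these lines; the cluster name and zone are placeholders, not values prescribed by the workshop, so adjust them to suit your project:

```shell
# Hypothetical example: a 3-node GKE cluster of machine type e2-standard-2.
# The cluster name and zone below are placeholders.
gcloud container clusters create my-istio-cluster \
  --machine-type "e2-standard-2" \
  --num-nodes "3" \
  --zone "us-central1-a"
```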
If you received Google credentials from the workshop instructors:
- A Kubernetes cluster has already been provisioned for you.
- Your instructor will demonstrate how to access and configure your environment, as described below.
- The instructions below explain in detail how to access your account, select your project, and launch the cloud shell.
Log in to GCP
- Log in to GCP using credentials provided by your instructor.
- Agree to the terms.
- When prompted to select your country, make your selection and click "Agree and continue".
Select your project
Select the GCP project you have been assigned, as follows:
- Click the project selector "pulldown" menu from the top banner, which will open a popup dialog
- Make sure the "Select from" organization is set to tetratelabs.com
- Select the tab named All
- You will see your GCP project name (istio-0to60..) listed under the organization tetratelabs.com
- Select the project from the list
Verify that your project is selected:
- If you look in the banner now, you will see your selected project displayed.
Launch the Cloud Shell
The Google Cloud Shell will serve as your terminal environment for these labs.
- Click the Activate Cloud Shell icon at the top right of the banner
- A dialog may pop up, click Continue
- Your cloud shell terminal should appear at the bottom of the screen
- Feel free to expand the size of the cloud shell, or even open it in a separate window (locate the icon button in the terminal header, on the right)
Warning
Your connection to the Cloud Shell is severed after a period of inactivity. Click the Reconnect button when this happens.
Configure cluster access

- Check that the `kubectl` CLI is installed.
- Generate a `kubeconfig` entry:
    - Activate the top navigation menu (menu icon on the top left-hand side of the page)
    - Locate and click on the product Kubernetes Engine (you may have to scroll down until you see it)
    - Your pre-provisioned 3-node Kubernetes cluster should appear in the main view
    - Click that row's "three dots" menu and select the Connect option
    - A dialog will appear with instructions
    - Copy the `gcloud` command shown and paste it into your cloud shell
    - Click Authorize when prompted
    - The console message will state that a kubeconfig entry was generated for your project
- Verify that your Kubernetes context is set to your cluster.
- Run a token command such as `kubectl get node` or `kubectl get ns` to ensure that you can communicate with the Kubernetes API server. A sketch of these checks is shown below.
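As a quick sanity check, a minimal sequence of commands along these lines, run from the Cloud Shell, covers the verification steps above; the output will vary with your project and cluster name:

```shell
# Confirm the kubectl CLI is available
kubectl version --client

# Confirm your current context points at the workshop cluster
kubectl config current-context

# Token commands: both should return results without errors
kubectl get node
kubectl get ns
```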
Instructions in subsequent labs assume you will be working from the Google Cloud Shell.
If you prefer not to set up your own Kubernetes environment, Killercoda offers a simple browser-based interactive environment. The Istio 0 to 60 scenarios have been ported to Killercoda and can be launched from here.
If you choose this option, please disregard this page's remaining instructions.
Yet another option is to run a Kubernetes cluster on your local machine using Minikube, Kind, or similar tooling. Your machine will need to meet minimum resource (CPU and memory) requirements, and you will need to ensure that ingress to LoadBalancer-type services functions. Here is a recipe for creating a local Kubernetes cluster with k3d:
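The following is a sketch of such a recipe, assuming k3d v5; the cluster name, node counts, and port mappings are illustrative and can be adjusted. It forwards ports 80 and 443 to k3d's built-in load balancer so that LoadBalancer-type services are reachable from your machine, and disables Traefik so it does not conflict with Istio's ingress gateway:

```shell
# Hypothetical k3d recipe: one server, two agents, host ports 80/443
# mapped to the cluster load balancer, Traefik disabled.
k3d cluster create my-istio-cluster \
  --servers 1 \
  --agents 2 \
  --port "80:80@loadbalancer" \
  --port "443:443@loadbalancer" \
  --k3s-arg "--disable=traefik@server:0"
```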
Tip

This workshop makes extensive use of the `kubectl` CLI. Consider configuring an alias to make typing a little easier. Here are commands to configure the "k" alias with command completion, for the bash shell:
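One way to do this, following the standard kubectl bash completion setup (add these lines to your ~/.bashrc if you want them to persist across sessions):

```shell
# Load kubectl completion for bash, alias kubectl to "k",
# and wire completion up to the alias as well.
source <(kubectl completion bash)
alias k=kubectl
complete -o default -F __start_kubectl k
```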
Artifacts
The lab instructions reference Kubernetes yaml artifacts that you will need to apply to your cluster at specific points in time.
You have the option of copying and pasting the yaml snippets directly from the lab instructions as you encounter them.
Another option is to clone the GitHub repository for this workshop from the Cloud Shell. You will find all yaml artifacts in the subdirectory named `artifacts`.
```shell
git clone https://github.com/tetratelabs/istio-0to60.git && \
  mv istio-0to60/artifacts . && \
  rm -rf istio-0to60
```
Next
Now that we have access to our environment and to our Kubernetes cluster, we can proceed to install Istio.