Lab environment

Options

If you brought your own Kubernetes cluster:

  • Istio version 1.23.0 officially supports Kubernetes versions 1.27 through 1.30. Feel free to consult the support status of Istio releases page for more information.

  • We recommend a 3-worker node cluster of machine type "e2-standard-2" or similar, though a smaller cluster will likely work just fine.
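To confirm that your cluster falls within the supported range, you can query the server version (this assumes your kubeconfig already points at the cluster):

```shell
# Print the client and server Kubernetes versions; check the server
# version against the range supported by Istio 1.23 (1.27 - 1.30)
kubectl version
```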

If you have your own public cloud account:

  • On GCP, the following command should provision a GKE cluster of adequate size for the workshop:

    gcloud container clusters create my-istio-cluster \
      --cluster-version latest \
      --machine-type "e2-standard-2" \
      --num-nodes "3" \
      --network "default"
    
  • Feel free to provision a K8S cluster on any infrastructure of your choosing.

Be sure to configure your kubeconfig file to point to your cluster.
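One way to verify that your kubeconfig points at the intended cluster (a sketch; your context name will differ):

```shell
# Show the context kubectl is currently using
kubectl config current-context

# If it is not the right one, switch contexts, e.g.:
#   kubectl config use-context my-istio-cluster
```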

If you received Google credentials from the workshop instructors:

  • A Kubernetes cluster has already been provisioned for you.
  • Your instructor will demonstrate the process of accessing and configuring your environment.
  • The instructions below explain in detail how to access your account, select your project, and launch the cloud shell.

Log in to GCP

  1. Log in to GCP using credentials provided by your instructor.
  2. Agree to the terms.
  3. When prompted to select your country, click "Agree and continue".

Select your project

Select the GCP project you have been assigned, as follows:

  1. Click the project selector "pulldown" menu from the top banner, which will open a popup dialog
  2. Make sure the Select from organization field is set to tetratelabs.com
  3. Select the tab named All
  4. You will see your GCP project name (istio-0to60..) listed under the organization tetratelabs.com
  5. Select the project from the list

Verify that your project is selected:

  • If you look in the banner now, you will see your selected project displayed.
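Once the Cloud Shell is open (see below), you can also confirm the active project from the command line:

```shell
# Print the project currently configured for the gcloud CLI;
# it should match the project you selected in the console
gcloud config get-value project
```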

Launch the Cloud Shell

The Google Cloud Shell will serve as your terminal environment for these labs.

  • Click the Activate cloud shell icon in the top-right corner of the page
  • A dialog may pop up; if so, click Continue
  • Your cloud shell terminal should appear at the bottom of the screen
  • Feel free to expand the size of the cloud shell, or even open it in a separate window (locate the icon button in the terminal header, on the right)

Warning

Your connection to the Cloud Shell gets severed after a period of inactivity. Click on the Reconnect button when this happens.

Configure cluster access

  1. Check that the kubectl CLI is installed

    kubectl version
    
  2. Generate a kubeconfig entry

    1. Activate the top navigation menu (Menu icon on the top left hand side of the page)
    2. Locate and click on the product Kubernetes Engine (you may have to scroll down until you see it)
    3. Your pre-provisioned 3-node Kubernetes cluster should appear in the main view
    4. Click on that row's "three dot" menu and select the Connect option
    5. A dialog prompt will appear with instructions
    6. Copy the gcloud command shown and paste it in your cloud shell
    gcloud container clusters get-credentials \
        $(gcloud container clusters list --format="value(name)") \
        --zone $(gcloud container clusters list --format="value(location)") \
        --project $(gcloud config get-value project)
    

    Click Authorize when prompted

    The console message will state that a kubeconfig entry [was] generated for [your project]

  3. Verify that your Kubernetes context is set for your cluster

    kubectl config get-contexts
    
  4. Run a simple command such as kubectl get node or kubectl get ns to ensure that you can communicate with the Kubernetes API Server.

    kubectl get ns
    

Instructions in subsequent labs assume you will be working from the Google Cloud Shell.

If you prefer not to set up your own Kubernetes environment, Killercoda offers a simple browser-based interactive environment. The Istio 0 to 60 scenarios have been ported to Killercoda and can be launched from there.

If you choose this option, please disregard this page's remaining instructions.

Yet another option is to run a Kubernetes cluster on your local machine using Minikube, Kind, or similar tooling. This option has minimal resource (CPU and memory) requirements, though you will need to ensure that ingress to LoadBalancer-type services functions. Here is a recipe for creating a local Kubernetes cluster with k3d:

k3d cluster create my-istio-cluster \
    --api-port 6443 \
    --k3s-arg "--disable=traefik@server:0" \
    --port 80:80@loadbalancer \
    --port 443:443@loadbalancer

Tip

This workshop makes extensive use of the kubectl CLI.

Consider configuring an alias to make typing a little easier. Here are commands to configure the "k" alias with command completion, for the bash shell:

cat << EOF >> ~/.bashrc

source <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k

EOF

source ~/.bashrc

Artifacts

The lab instructions reference Kubernetes yaml artifacts that you will need to apply to your cluster at specific points in the labs.

You have the option of copying and pasting the yaml snippets directly from the lab instructions as you encounter them.

Another option is to clone the GitHub repository for this workshop from the Cloud Shell. You will find all yaml artifacts in the subdirectory named artifacts.

git clone https://github.com/tetratelabs/istio-0to60.git && \
  mv istio-0to60/artifacts . && \
  rm -rf istio-0to60

Next

Now that we have access to our environment and to our Kubernetes cluster, we can proceed to install Istio.