
Implementing AWS NLB (Bonus Module)

In previous modules the applications were published via an AWS Classic Load Balancer. In this module we'll demonstrate that an AWS NLB can provide the same result; an NLB is often the preferred choice for infrastructure teams.

Adding NLB in parallel

(Diagram: NLB added in parallel with the existing Classic LB)

The first step of this module is to add an NLB to the environment. It will serve traffic in parallel with the existing AWS Classic LB.

We'll configure the NLB to listen on HTTPS port 443 only. Traffic flow is controlled via DNS: if helloworld.tetrate.io resolves to the CNAME of the NLB, only HTTPS requests are accepted. If, however, the DNS CNAME record for helloworld.tetrate.io points to the Classic LB, both HTTP and HTTPS requests are accepted.
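The two scenarios above can be sketched as alternative CNAME records. This is an illustrative zone-file fragment only; the TTL and the target hostnames are placeholders, not values from your environment:

```text
; Option A: only HTTPS traffic is accepted (record targets the NLB)
helloworld.tetrate.io.  300  IN  CNAME  <nlb-hostname>.elb.ap-northeast-2.amazonaws.com.

; Option B: HTTP and HTTPS are accepted (record targets the Classic LB)
helloworld.tetrate.io.  300  IN  CNAME  <classic-lb-hostname>.elb.ap-northeast-2.amazonaws.com.
```

Only one of the two records exists at a time; switching between them is what moves production traffic from one load balancer to the other.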

IMPORTANT POINT: after traffic reaches the Istio ingress gateway, packets are processed according to the same rules (defined by the Gateway and VirtualService manifests created in previous modules).

Kubernetes service manifest for AWS NLB

A new service will be created that leverages the previously created VirtualService and Gateway. The following service definition will cause AWS to create an NLB (without the annotations below, a Classic Load Balancer would be created). The manifest is also available here for download.

apiVersion: v1
kind: Service
metadata:
  name: tid-nlb-ingressgateway
  namespace: tid-ingress-ns
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    istio: tid-ingress-gw
  ports:
  - port: 443
    protocol: TCP

Service creation might take up to 15 minutes. After that, the health checks in the AWS Console will turn green (similar to the example below).
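Because provisioning can take a while, it can be convenient to poll until the load balancer address is published on the Service. A minimal sketch (the helper name, polling interval, and attempt count are illustrative; the namespace and service name match the manifest above):

```shell
# Poll the Service until AWS publishes the load balancer hostname (or IP).
wait_for_lb() {
  local ns=$1 svc=$2 attempts=${3:-30}
  local addr=""
  for _ in $(seq 1 "$attempts"); do
    addr=$(kubectl -n "$ns" get service "$svc" \
      -o=jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}" 2>/dev/null)
    if [ -n "$addr" ]; then
      echo "$addr"
      return 0
    fi
    sleep 30   # NLB provisioning is slow; re-check every 30 seconds
  done
  return 1
}
```

Usage: `ADDR_NLB=$(wait_for_lb tid-ingress-ns tid-nlb-ingressgateway)`.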

(Screenshot: AWS Console showing green health checks for the NLB)

To confirm that the service is functional, let's test with curl:

$ ADDR_NLB=$(kubectl -n tid-ingress-ns get service tid-nlb-ingressgateway -o=jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
$ echo $ADDR_NLB
a04f53cdcf96440949fd214f089a88ff-1b00d846c75bb8bd.elb.ap-northeast-2.amazonaws.com
$ curl -k --connect-to helloworld.tetrate.io:443:${ADDR_NLB} https://helloworld.tetrate.io/hello
Hello version: v2, instance: helloworld-v2-79bf565586-p6zph

Eliminating AWS Classic LB and plain-text HTTP

A very common situation: when an application moves to production, security requirements are tightened (no plain HTTP is allowed), and the infrastructure team often mandates a specific AWS LB type as well.

(Diagram: target state with plain HTTP disabled)

Currently, traffic arriving on both the AWS Classic LB (HTTP on port 80 and HTTPS on port 443) and the AWS NLB (HTTPS on port 443) is served by the helloworld application. The changes we are going to make will allow only encrypted traffic on port 443 of the AWS Classic LB to be accepted.

As shown in the diagram, the VirtualService will no longer route traffic on port 80, which will produce 404 Not Found for any plain HTTP request. Since HTTP requests were served by helloworld-v1 (green), no known traffic will be routed to the v1 microservice anymore; that is why it is shown as disconnected.

We've confirmed NLB functionality in the previous step. Let's test the Classic LB before making changes:

$ curl $ADDR/hello -H "Host: helloworld.tetrate.io" # Test http on AWS Classic
Hello version: v2, instance: helloworld-v2-79bf565586-p6zph
$ curl -k --connect-to helloworld.tetrate.io:443:${ADDR} https://helloworld.tetrate.io/hello  # Test https on AWS Classic
Hello version: v1, instance: helloworld-v1-77cb56d4b4-vzpj6

Modify Virtual Service (VS)

Removing the http (port 80) configuration from the VS object will disable plain-text traffic. The YAML file is found here.
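For orientation, a hypothetical sketch of what 8_vs_https_only.yaml could look like after the change: the http match for port 80 from the previous module is gone, leaving only the port 443 route. The Gateway name, destination host, and destination port below are illustrative placeholders, not values from the workshop files:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: helloworld-service
  namespace: tid-ingress-ns
spec:
  hosts:
  - helloworld.tetrate.io
  gateways:
  - tid-ingress-gateway          # placeholder: your Gateway resource name
  http:
  - match:
    - port: 443                  # only TLS-terminated traffic is routed
    route:
    - destination:
        host: helloworld.default.svc.cluster.local   # placeholder backend
        port:
          number: 5000           # placeholder port
```

Any request arriving on port 80 no longer matches a route, so Envoy answers it with 404 Not Found.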

Now let's apply and re-test:

$ kubectl apply -f 8_vs_https_only.yaml 
virtualservice.networking.istio.io/helloworld-service configured

$ curl -I $ADDR/hello -H "Host: helloworld.tetrate.io" # Test http on AWS Classic
HTTP/1.1 404 Not Found
date: Wed, 30 Nov 2022 20:39:42 GMT
server: istio-envoy
transfer-encoding: chunked

$ curl -k --connect-to helloworld.tetrate.io:443:${ADDR} https://helloworld.tetrate.io/hello  # Test https on AWS Classic
Hello version: v2, instance: helloworld-v2-79bf565586-p6zph

As expected, port 80 no longer has any service associated with it.

Remove AWS Classic LB

Removing the Classic load balancer is simple: deleting the tid-ingressgateway Kubernetes service triggers deletion of the AWS LB. The FQDN for helloworld-tls should be updated to point to the CNAME record of the LB associated with tid-nlb-ingressgateway (remember, it is stored in the $ADDR_NLB environment variable).

kubectl -n tid-ingress-ns delete service tid-ingressgateway 

Confirm with a test that the NLB is still serving traffic:

$ ADDR_NLB=$(kubectl -n tid-ingress-ns get service tid-nlb-ingressgateway -o=jsonpath="{.status.loadBalancer.ingress[0]['hostname','ip']}")
$ echo $ADDR_NLB
a04f53cdcf96440949fd214f089a88ff-1b00d846c75bb8bd.elb.ap-northeast-2.amazonaws.com
$ curl -k --connect-to helloworld.tetrate.io:443:${ADDR_NLB} https://helloworld.tetrate.io/hello
Hello version: v2, instance: helloworld-v2-79bf565586-p6zph

The final state of your setup is reflected in the diagram below: HTTPS traffic is accepted only by the NLB and served only by the blue version of the application.

(Diagram: final state, HTTPS-only traffic via the NLB)

A video walkthrough is also available.

Congratulations, you have completed the Tetrate TID Addon for the AWS EKS Workshop.