Overview
---
1. Setting up Minikube and Istio
2. Installing Bookinfo
3. Observability
4. Traffic Management 1
5. Traffic Management 2

APPENDIX - Important commands
A microservice architecture is arguably the best fit for cloud native applications and it greatly increases the speed of delivery. But imagine you have many microservices, delivered by multiple teams. How do you observe the overall platform and each individual service to find out exactly what is going on? And when something goes wrong, how do you know which service, or which communication between services, is causing the problem?
Istio integrates very well with a range of (open-source) telemetry and observability tools that can provide broad and granular insight into the health of all services. Istio’s role as a service mesh makes it the ideal data source for observability information, particularly in a microservices environment.
As requests pass through multiple services, identifying performance bottlenecks becomes increasingly difficult with traditional debugging techniques. Distributed tracing with a tool such as Jaeger provides a holistic view of a request as it transits multiple services, allowing latency issues to be identified immediately. With Istio, distributed tracing comes out of the box and exposes latency, retry, and failure information for each hop in a request.
In Exercise 1, you installed four telemetry and observability add-ons: Jaeger, Prometheus, Grafana, and Kiali.
There is a whole section on Observability in the Istio documentation.
Look at the installed Telemetry services:
kubectl get svc -n istio-system
You can see that none of the Telemetry services is of type LoadBalancer or NodePort, which means there is no simple way to access them from outside the Kubernetes cluster. In the following examples we will use Kubernetes port-forwarding to compensate for this.
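If you want to verify the service types quickly, a convenient (optional) variant of the command above prints only the name and type of each service:

# Show only name and type; the telemetry services should all be of type ClusterIP
kubectl get svc -n istio-system -o custom-columns=NAME:.metadata.name,TYPE:.spec.type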
But first, start generating some load on your Bookinfo instance. In yet another new session, enter the following commands:
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
watch curl http://$INGRESS_HOST:$INGRESS_PORT/productpage
Keep this running during this exercise.
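If the full HTML output that watch curl prints is too noisy, a quieter variant that only shows the HTTP status code generates the same load; this is just a sketch using standard curl options:

# Request the productpage once per second and print only the HTTP status code
watch -n 1 "curl -s -o /dev/null -w '%{http_code}' http://$INGRESS_HOST:$INGRESS_PORT/productpage"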
To start the port forwarding, issue the following command in a separate session (and keep it running):
kubectl port-forward service/tracing 8008:80 -n istio-system
Open the Jaeger UI in your browser at http://localhost:8008 and click on ‘Find Traces’.
Click on one of the trace entries to see the details:
On the left side you can see how the request passes through the different services: it comes in via the Istio ingress gateway, goes from Productpage to Details, back to Productpage, from Productpage to Reviews, and from Reviews to Ratings. And the graph shows how much time is spent in each service.
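As an optional extra, the Jaeger UI is backed by a small JSON API on the same port, so (assuming the API paths used by the UI have not changed in your Jaeger version) you can also list the services Jaeger has collected traces for through the same port-forward:

# List the service names Jaeger has seen (JSON output)
curl -s http://localhost:8008/api/services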
When you are finished with Jaeger, terminate the port forwarding (Ctrl+C on both Linux and macOS).
To start the port forwarding, issue the following command in a separate session (and keep it running):
kubectl port-forward service/prometheus 9090:9090 -n istio-system
In the Prometheus UI (http://localhost:9090), select ‘istio_requests_total’, then click ‘Execute’ and select the ‘Graph’ tab.
This is simply a cumulative graph of all requests (istio_requests_total is a counter, so it only ever goes up):
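Since the raw counter is rarely interesting on its own, a rate query usually tells you more. As a sketch, assuming the standard Istio metric labels destination_service_name and destination_version, the following query asks for the per-second request rate to the reviews service, per version; you can paste the query into the UI or send it through the Prometheus HTTP API via the same port-forward:

# Per-second request rate to the reviews service over the last 5 minutes, split by version
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=sum(rate(istio_requests_total{destination_service_name="reviews"}[5m])) by (destination_version)'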
When you are finished with Prometheus, terminate the port forwarding (Ctrl+C on both Linux and macOS).
To start the port forwarding, issue the following command in a separate session (and keep it running):
kubectl port-forward service/grafana 3000:3000 -n istio-system
Click on the “hamburger” menu (1), then on “Dashboard” (2).
From the list of dashboards select the “Istio Performance Dashboard”
This is general performance information collected from the service mesh:
Note: In some of the dashboards you can see a “datasource” selector with a default data source already selected. When you click on the pulldown you can see that only one data source is available, and that is Prometheus. This means that Grafana needs the Prometheus data to display its dashboards.
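You can verify this outside the UI as well. Assuming the sample Grafana add-on still allows anonymous access (its default), the Grafana HTTP API lists the configured data sources:

# List Grafana's configured data sources; expect a single Prometheus entry
curl -s http://localhost:3000/api/datasources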
When you are finished with Grafana, terminate the port forwarding (Ctrl+C on both Linux and macOS).
To start the port forwarding, issue the following command in a separate session (and keep it running):
kubectl port-forward service/kiali 20001:20001 -n istio-system
Open the Kiali UI in your browser at http://localhost:20001. Click on the ‘Graph’ tab (1), select the ‘default’ namespace (2), select ‘Versioned app graph’ (3), and in the ‘Display’ pulldown, select ‘Traffic Distribution’ (4):
You can now see a graphical representation of your micro services including the distribution of requests amongst your services.
Watch the distribution of requests amongst the 3 versions of the Reviews service: roughly 1/3 = 33.3 % goes to each version, an equal distribution or “round robin”. Note: Kiali needs to be active for a while before the distribution looks roughly equal.
Also note that only v2 and v3 are making requests to the Ratings service.
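You can cross-check this picture against the cluster itself. Assuming the default Bookinfo labels (app and version), the three Reviews deployments can be listed like this:

# The three reviews deployments, one per version, identified by their 'version' label
kubectl get deployments -n default -l app=reviews -o custom-columns=NAME:.metadata.name,VERSION:.metadata.labels.version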
Click on the ‘Istio Config’ tab:
Here you can see the Istio-specific configuration applied to your microservices. In Exercise 2, step ‘2 Allow external access to application’, you deployed a configuration from the file ‘samples/bookinfo/networking/bookinfo-gateway.yaml’. This YAML contains the specifications for the Gateway and the VirtualService. We will look at them more closely in the next exercise.
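The same objects Kiali shows here can also be listed with kubectl; assuming only the Istio CRDs define these resource names in your cluster, the short forms below resolve to Istio’s Gateway and VirtualService types:

# The Gateway and VirtualService created from bookinfo-gateway.yaml
kubectl get gateway,virtualservice -n default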
You can keep Kiali and the corresponding Port Forward session open since we will use it frequently.