Guide on Nginx Ingress Path-based routing
Confused about Nginx Ingress? This article walks through path-based routing step by step.
Kubernetes supports a high-level abstraction called Ingress, which allows simple host-, path-, or URL-based HTTP routing. Ingress is a core Kubernetes API (stable as networking.k8s.io/v1 since Kubernetes 1.19, after a long beta). It is always implemented by a third-party proxy; these implementations are known as Ingress controllers.
Kubernetes Ingress is an API object that provides a collection of routing rules that govern how external/internal users access Kubernetes services running in a cluster.
An Ingress controller is responsible for reading the Ingress resource information and processing it accordingly. It is typically a Deployment or DaemonSet, deployed as Kubernetes Pods, that watches the API server for updates to Ingress resources.
Several Ingress controllers are available for Kubernetes; popular choices include the NGINX Ingress Controller (used in this guide), Traefik, HAProxy Ingress, and Contour.
Exposing your application on Kubernetes nginx ingress
In Kubernetes, there are several different ways to expose your application; using Ingress to expose your service is one way of doing it. Ingress is not a service type, but it acts as the entry point for your cluster. It lets you consolidate your routing rules into a single resource, as it can expose multiple services under the same IP address.
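To illustrate how a single Ingress consolidates routing for multiple services, here is a minimal sketch using the networking.k8s.io/v1 API. The service names, paths, and ports are hypothetical, not taken from this guide's repository:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - http:
        paths:
          # Requests to /app1 go to one backend service...
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 80
          # ...and requests to /app2 go to another,
          # both reachable through the same external IP.
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 80
```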
This post will explain how to use an ingress resource with Nginx-ingress Controller and front it with an NLB (Network Load Balancer).
Getting external traffic into Kubernetes: ClusterIP, NodePort, LoadBalancer, and Ingress
When you start using Kubernetes for real-world applications, one of the first things to consider is how to route external traffic into your cluster. There are several ways to do this:
- Using the Kubernetes proxy and ClusterIP: The default ServiceType in Kubernetes is ClusterIP, which exposes a Service on a cluster-internal IP. To reach the ClusterIP from an external source, you can open a Kubernetes proxy between the external source and the cluster. However, this is usually only used for development purposes.
- Exposing services as NodePort: If you declare a Service as NodePort, it will be exposed on each Node's IP at a static port (NodePort). You can then access the Service from outside the cluster by requesting <NodeIP>:<NodePort>. This can also be used for production, but with some limitations.
- Exposing services as LoadBalancer: If you declare a Service as LoadBalancer, it will be exposed externally using a cloud provider's load balancer solution. The cloud provider will provision a load balancer for the Service and map it to its automatically assigned NodePort. This is the most commonly used method in production environments.
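The difference between the last two options comes down to a single field. The following sketch (with a hypothetical "web" app; names and ports are illustrative, not from this guide) shows a Service that could be switched between the two types:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  # NodePort: reachable at <NodeIP>:<assigned NodePort>.
  # LoadBalancer: the cloud provider additionally provisions
  # an external load balancer mapped to that NodePort.
  type: LoadBalancer
  selector:
    app: web            # matches the pods to route to
  ports:
    - port: 80          # port the Service listens on
      targetPort: 8080  # port the pods listen on
```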
Why do I need a load balancer in front of an NGINX ingress controller?
Ingress is tightly integrated into Kubernetes, meaning that your existing workflows around kubectl will likely extend nicely to managing ingress. An Ingress controller does not typically eliminate the need for an external load balancer; it simply adds an additional layer of routing and control behind the load balancer.
Pods and nodes are not guaranteed to live for as long as the user intends: pods are ephemeral and may be terminated by Kubernetes during events such as:
- Scaling.
- Memory or CPU saturation.
- Rescheduling for more efficient resource use.
- Downtime due to outside factors.
A load balancer (the Kubernetes LoadBalancer Service type) is a construct that provides a single, fixed endpoint for a given set of pods or worker nodes. To take advantage of the previously discussed benefits of a Network Load Balancer (NLB), we create a Kubernetes Service of type: LoadBalancer with the NLB annotations; this load balancer sits in front of the ingress controller, which is itself a pod or a set of pods. In AWS, for a set of EC2 compute instances managed by an Auto Scaling group, there should be a load balancer that acts as both a fixed, referable address and a load-balancing mechanism.
Ingress with load balancer
To start, clone the repository containing the manifests used in this guide:
# git clone https://github.com/shanki84/nginx-ingress.git
Create Ingress Controller
We use the kubectl apply command, which creates all resources defined in the given file.
# kubectl apply -f nginx-ingress-controller.yaml
This command installs all the components required on the Kubernetes cluster:
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
Create Static LoadBalancer
By default, Kubernetes exposes NodePort services on the port range 30000-32767. Fronting the Ingress controller with a load balancer lets us serve traffic on the standard port 80 instead.
AWS / Azure / GKE
AWS
Network Load Balancer
# kubectl create -f aws-nlb-service.yaml -n ingress-nginx
# kubectl get svc -n ingress-nginx
NAMESPACE       NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                                                                     PORT(S)                      AGE
ingress-nginx   ingress-nginx-   LoadBalancer   172.30.188.11   a##########7d11e9b47702ef02f8e6f-7##########d33f2.elb.eu-west-1.amazonaws.com   80:36788/TCP,443:30781/TCP   31s
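The aws-nlb-service.yaml applied above might look like the following sketch. The NLB annotation is the standard AWS one; the selector labels and port mapping are assumptions that must match how your ingress controller pods are labelled:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # Ask the AWS cloud provider for a Network Load Balancer
    # instead of the default Classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    # Assumed labels; must match the nginx-ingress-controller pods.
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80       # NLB listener
      targetPort: 80 # controller pod port
    - name: https
      port: 443
      targetPort: 443
```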
NLB created:
a####393ff57d11e9b47702ef02f8e6f-7c4#######5d33f2.elb.eu-west-1.amazonaws.com
AZURE / GCE-GKE
# kubectl create -f generic-lb-service -n ingress-nginx
LB service for Azure / GCE-GKE created; the cloud provider assigns the external IP or hostname, which you can retrieve with kubectl get svc -n ingress-nginx.
Our Ingress Controller is now available on port 80 for HTTP and 443 for HTTPS:
Deploy Microservices
"Apple" and "Samsung" are the two microservices, deployed under the namespace "demoapp"; if no namespace is specified, they will be deployed in the default namespace.
# kubectl create -f apple-app.yaml -n demoapp
# kubectl create -f samsung-app.yaml -n demoapp
Apple and Samsung each expose their service over a NodePort.
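For illustration, apple-app.yaml might look like the sketch below (samsung-app.yaml would mirror it). The hashicorp/http-echo image and its port 5678 are assumptions, commonly used in this style of demo, not confirmed contents of the repository:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
    - name: apple-app
      image: hashicorp/http-echo   # assumed demo image
      args: ["-text=apple"]        # responds with "apple" to every request
---
apiVersion: v1
kind: Service
metadata:
  name: apple-service
spec:
  type: NodePort        # exposed on each node at an auto-assigned port
  selector:
    app: apple
  ports:
    - port: 5678        # http-echo's default listen port
```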
Create Ingress-Resources
Create an Ingress-Resource, which has rules to perform path-based routing.
# kubectl create -f ingress-resources.yaml -n demoapp
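The ingress-resources.yaml applied above could be sketched as follows. The backend service names and port are assumptions carried over from the demo apps; the annotation is the one this guide relies on:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    # Tells the NGINX ingress controller to handle this resource.
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - path: /apple            # route /apple to the Apple service
            pathType: Prefix
            backend:
              service:
                name: apple-service
                port:
                  number: 5678      # assumed service port
          - path: /samsung          # route /samsung to the Samsung service
            pathType: Prefix
            backend:
              service:
                name: samsung-service
                port:
                  number: 5678
```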
Validate the Ingress resource rules:
# kubectl describe ing -n demoapp
Microservices can also live in any other namespace, with the Ingress resource created in that same namespace. The Ingress controller discovers Ingress resources through an annotation.
The Ingress resource carries the annotation:
annotations:
kubernetes.io/ingress.class: "nginx"
The Ingress controller holds a matching election ID, "nginx":

# Defaults to "<election-id>-<ingress-class>"
# Here: "ingress-controller-leader-nginx"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
Can I reuse a NLB with services running in different namespaces? In the same namespace?
Yes. Install the NGINX ingress controller as explained above, then in each of your namespaces define an Ingress resource with the nginx ingress-class annotation; the Ingress controller will read all of these Ingress resources.
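As a sketch, an Ingress in a second, hypothetical namespace would reuse the same NLB simply by carrying the same annotation (the namespace, service name, path, and port below are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: teamb-ingress
  namespace: teamb                       # hypothetical second namespace
  annotations:
    kubernetes.io/ingress.class: "nginx" # same class, same controller, same NLB
spec:
  rules:
    - http:
        paths:
          - path: /teamb
            pathType: Prefix
            backend:
              service:
                name: teamb-service      # must exist in namespace "teamb"
                port:
                  number: 80
```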
Validate the deployed microservices:
http://a####393ff57d11e9b47702ef02f8e6f-7c4#######5d33f2.elb.eu-west-1.amazonaws.com/apple
http://a####393ff57d11e9b47702ef02f8e6f-7c4#######5d33f2.elb.eu-west-1.amazonaws.com/samsung
If you get an error response, re-apply the NGINX ingress controller config file: it contains the RBAC rules that give NGINX permission to read the newly added microservices and their Services.
# kubectl apply -f nginx-ingress-controller.yaml -n ingress-nginx
Cleanup
Delete the Ingress resource:
# kubectl delete -f https://github.com/shanki84/nginx-ingress/blob/master/ingress-resources.yaml -n demoapp
Delete the services:
# kubectl delete -f apple-app.yaml -n demoapp
# kubectl delete -f samsung-app.yaml -n demoapp
Delete the NLB:
# kubectl delete -f aws-nlb-service.yaml -n ingress-nginx
Delete the NGINX ingress controller:
# kubectl delete -f nginx-ingress-controller.yaml -n ingress-nginx
We hope this post was useful! Please let us know in the comments.
Please read our other blogs on Data Science here.
Blog Written by:
Thiviya Shankar Girija Shankar
Architect Delivery Specialist, Accenture UK
- Zep Analytics
- May, 31 2023