February 14, 2024

Services: How Kubernetes exposes your application.

Pods are ephemeral resources, so you shouldn't expect any individual pod to be reliable or long-lived. Each pod gets its own IP address, which means that for a given deployment the set of pods (and therefore the set of IPs) serving your application at one moment may differ from the set serving it a moment later. So how can you maintain a stable way to connect to your application despite this churn?

Before getting into the core topic of this article, services, it's important to understand how one of Kubernetes' fundamental components works: kube-proxy.

kube-proxy

kube-proxy is responsible for making every service that needs to be accessible within the cluster reachable from its node. When a pod wants to reach a service, it sends traffic to the service's IP; kube-proxy intercepts this traffic and forwards it to one of the pods backing the service, according to its load-balancing rules. Whenever a new service is created, or an endpoint is added to or removed from an existing service, the API server notifies kube-proxy of the change.

Based on the information about available services and endpoints, kube-proxy generates Network Address Translation (NAT) rules for each service. These rules map the service's IP to the IPs of the pods backing that service, and they are applied on the node where kube-proxy is running.

For example, if the pod IPs are:

  • Pod1: 10.1.1.1
  • Pod2: 10.1.1.2
  • Pod3: 10.1.1.3

And the service IP is 10.10.10.10, then the NAT rules might look something like:

  • When receiving traffic destined for 10.10.10.10, forward it to 10.1.1.1
  • When receiving traffic destined for 10.10.10.10, forward it to 10.1.1.2
  • When receiving traffic destined for 10.10.10.10, forward it to 10.1.1.3

When traffic arrives at the node where kube-proxy is running and is destined for the service's IP (10.10.10.10 in this example), the NAT rules redirect this traffic to one of the IPs of the pods implementing the service (e.g., 10.1.1.1).
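
To make this more concrete, here is a simplified, illustrative sketch of what such rules can look like when kube-proxy runs in its iptables mode. The chain names (KUBE-SVC-EXAMPLE, KUBE-SEP-1, and so on) and the exact probabilities are placeholders; the real rules generated on your nodes will look different in detail.

# Traffic destined for the service IP is matched in the services chain
# (chain names below are illustrative placeholders)
-A KUBE-SERVICES -d 10.10.10.10/32 -p tcp --dport 80 -j KUBE-SVC-EXAMPLE

# The service chain picks one backend at random
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.33333 -j KUBE-SEP-1
-A KUBE-SVC-EXAMPLE -m statistic --mode random --probability 0.50000 -j KUBE-SEP-2
-A KUBE-SVC-EXAMPLE -j KUBE-SEP-3

# Each endpoint chain rewrites the destination to a pod IP (the NAT step)
-A KUBE-SEP-1 -p tcp -j DNAT --to-destination 10.1.1.1:80
-A KUBE-SEP-2 -p tcp -j DNAT --to-destination 10.1.1.2:80
-A KUBE-SEP-3 -p tcp -j DNAT --to-destination 10.1.1.3:80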

Services

In Kubernetes, a service is an abstraction that defines a set of pods and a policy for accessing these pods. Services allow a set of pods to be accessed uniformly, regardless of how many pods are running or which cluster node they reside on.

Let's walk through a concrete example to illustrate how this works.

First, let's write the deployment file:

nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Apply:

apply.sh
kubectl apply -f nginx-deployment.yaml

And now let's write the service:

nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

Apply:

apply.sh
kubectl apply -f nginx-service.yaml

To test this from outside the cluster, let's start a local proxy to the Kubernetes API server:

proxy.sh
kubectl proxy --port=8080

Finally, we can run a test using the curl command:

curl.sh
curl http://localhost:8080/api/v1/namespaces/default/services/nginx-service/proxy

If everything went well, you should receive the default nginx welcome page. Note that the service selects pods by their app: nginx label and forwards traffic to their port 80, as described earlier.
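
If you want to confirm what was created, the following commands (the script name here is just illustrative) list the pods matched by the app: nginx selector and show the virtual cluster IP assigned to the service:

verify.sh
kubectl get pods -l app=nginx -o wide
kubectl get service nginx-service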

EndpointSlice

Previously, a service's backends were tracked by a single Endpoints object, which held one list of all available IPs and ports. In large clusters this approach can result in overhead and management difficulties, since every change to any endpoint means updating and redistributing the entire object.

EndpointSlice addresses this issue by splitting a service's endpoints across multiple smaller objects, or slices. Each slice contains a subset of the service's endpoint IPs and ports, which keeps individual objects small, makes large amounts of connectivity information easier to handle, and improves cluster performance when endpoints change.

Additionally, EndpointSlice objects are linked to their Service through the kubernetes.io/service-name label rather than being embedded in a single object, which lets Kubernetes manage connectivity information in smaller pieces and update only the affected slice when something changes.
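
For services that use selectors, like the nginx-service created earlier, Kubernetes manages these slices automatically, and you can list them by that label (script name is just illustrative):

endpointslices.sh
kubectl get endpointslices -l kubernetes.io/service-name=nginx-service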

Let's propose an example:

Suppose you have an application consisting of multiple pods distributed across different nodes in your Kubernetes cluster. These pods provide a MySQL database service. Instead of using selectors to route traffic to these pods, you want to manually configure the IPs and ports of the pods in the service, and you choose to use EndpointSlices for this purpose.

mysql-endpoint-slice.yaml
apiVersion: discovery.k8s.io/v1
kind: EndpointSlice
metadata:
  name: mysql-endpointslice
  labels:
    # This label is what associates the slice with the mysql-service defined below
    kubernetes.io/service-name: mysql-service
addressType: IPv4
ports:
- name: mysql
  protocol: TCP
  port: 3306
endpoints:
- addresses:
  - 192.168.1.1
- addresses:
  - 192.168.1.2
- addresses:
  - 192.168.1.3

Instead of using a selector to direct traffic, you create the Service without one; the kubernetes.io/service-name label on the EndpointSlice above is what ties the slice to the service.

mysql-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-service
spec:
  # No selector: endpoints come from the manually managed EndpointSlice above
  ports:
  - name: mysql
    port: 3306
    protocol: TCP
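
After applying both manifests, a quick check (using the names above) shows whether the slice's addresses were picked up as the service's backends:

check.sh
kubectl apply -f mysql-endpoint-slice.yaml -f mysql-service.yaml
kubectl describe service mysql-service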

You can also use an EndpointSlice in this way to point a Service at backends outside the cluster, such as a service running in a different cluster.

ClusterIP

Kubernetes offers several types of services, each with its own characteristics and behavior.

ClusterIP is the service type used to expose a set of pods behind a single internal access point within the cluster. Specifically, ClusterIP assigns a stable virtual IP address to the service, and traffic sent to that IP is transparently load-balanced across the backing pods. This type of service is suited to internal communication between application components inside the Kubernetes cluster.
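
As a quick illustration, the nginx-service from earlier is a ClusterIP service (ClusterIP is the default when no type is specified), so it can be reached from inside the cluster by name. A throwaway pod works well for this kind of test; the pod name and image below are just examples:

cluster-test.sh
kubectl run tmp --rm -it --restart=Never --image=busybox:1.36 -- wget -qO- http://nginx-service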

NodePort

When a service is exposed using NodePort, each node in the cluster opens the same specified port, directing traffic received on that port to the corresponding pods.

For example, if you assign port 30080 to a service using NodePort (node ports must fall within the cluster's configured range, which is 30000-32767 by default), every node in the Kubernetes cluster will listen on port 30080. When traffic arrives at any node on this port, Kubernetes forwards it to one of the pods that the service is exposing.
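
A NodePort variant of the earlier nginx service might look like the following; the file name, service name, and nodePort value are just examples, and if you omit nodePort entirely, Kubernetes picks a free port from the range for you:

nginx-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80        # the service's cluster-internal port
    targetPort: 80  # the container port on the pods
    nodePort: 30080 # the port opened on every node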

NodePort is useful when you want to access your Kubernetes applications from outside the cluster, typically for development, testing, or exposing services to clients or end-users. However, in production environments, it's often recommended to use an ingress controller instead of NodePort to route incoming traffic to services, as it offers more flexibility and advanced features such as URL-based routing and TLS support.

LoadBalancer

When a service is exposed as a LoadBalancer, Kubernetes integrates with the underlying cloud provider to automatically provision an external load balancer, such as an application-layer load balancer or a network-layer load balancer.

LoadBalancer directs inbound traffic to the Kubernetes pods that are part of the service, ensuring that load is distributed efficiently and that application instances are available and accessible to external users.

As a result, your application is accessible from a single public entry point, provided by the IP address or DNS name assigned by the cloud provider to the external load balancer.
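
A minimal LoadBalancer variant of the nginx service could look like this (file and service names are just examples); whether an external load balancer is actually provisioned depends on your cloud provider, or on an add-on such as MetalLB in on-premises clusters:

nginx-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

Once the provider assigns an address, it appears in the EXTERNAL-IP column of kubectl get service.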

ExternalName

Instead of exposing a set of pods like the other service types, an ExternalName service maps the service name to an external hostname by returning a DNS CNAME record.

The primary purpose of ExternalName services is to allow applications within the Kubernetes cluster to communicate with external services, such as databases hosted outside the cluster, third-party services, or external API endpoints, using a DNS name. This simplifies communication, making it independent of external IP address changes.

Consider an example:

external-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: domain.com

When an application inside the cluster looks up the service name 'external-service', the cluster's internal DNS answers with a CNAME pointing to the configured external name. The final resolution of that name happens outside the Kubernetes cluster, typically via the DNS of the environment where the cluster is deployed.
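
You can observe this behavior with a DNS lookup from a temporary pod (pod name and image are just examples); the lookup should ultimately resolve via domain.com rather than a cluster IP:

dns-test.sh
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup external-service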

Question: What type of Kubernetes service exposes an application on a static IP address within the cluster?
  • 1 - NodePort
  • 2 - ClusterIP
  • 3 - LoadBalancer
  • 4 - ExternalName

Conclusion

Kubernetes services play a vital role in facilitating communication between components, both internal and external. While I have covered some of the functionality offered by this feature, the opportunities are vast, especially when combined with other tools available in the Kubernetes arsenal. I hope this introduction clarifies and provides a solid starting point for exploring the world of networking in Kubernetes.

Time is a precious commodity, and I appreciate you generously sharing a portion of yours with me.
