Azure Kubernetes Service (AKS) is a fully managed container orchestration platform. You can use it to simplify the deployment, scaling, and management of containerized applications in the Azure cloud infrastructure. AKS takes care of the underlying infrastructure and cluster management tasks so you can focus on building and running cloud-native applications.
AKS provides a fully managed Kubernetes environment, eliminating the need for manual cluster setup, maintenance, and upgrades. The environment includes a scalable and highly available networking infrastructure that supports Kubernetes network policies. Network policies provide a mechanism to manage the flow of traffic between pods. This ensures application security, optimizes network resource use, and allows for network segmentation at the pod level.
This article explores the essentials of AKS networking, providing real-world use cases that demonstrate how to effectively configure network policies to establish robust network security for pods in AKS clusters.
AKS network infrastructure enables seamless communication within Kubernetes clusters, allowing pods to interact with each other, access external services, and use various resources. This lets you distribute your application workloads efficiently across the cloud infrastructure, improving performance and availability for your users.
AKS networking infrastructure uses overlay networks to create a virtual network on top of the physical network. This enables pods to communicate with each other as if they were on the same physical network, even if they're running on different nodes.
Network plugins in AKS are responsible for implementing and managing the overlay network. These plugins interface with the Container Networking Interface (CNI), which handles network functions in each node, to configure and manage network connectivity for pods within the cluster.
AKS supports two primary network plugins: Kubenet and the Azure Container Networking Interface (CNI).
Kubenet is the default networking plugin in AKS. It’s a simple and lightweight plugin suitable for small to medium-sized clusters with basic networking requirements.
Fig. 1: AKS Kubenet architecture
In the Kubenet configuration, each node in the AKS cluster receives an IP address from the virtual network subnet, while pods receive IP addresses from a logically separate address space. Traffic leaving the cluster is network address translated (NAT) to the node's primary IP address. While this approach is simple and conserves virtual network addresses, it can lead to limitations in scenarios requiring more advanced networking features.
Azure CNI is an advanced networking plugin that provides more flexibility and control over network configuration. It’s suitable for large-scale clusters with complex networking requirements, such as multi-tenancy, network segmentation, and integration with Azure services.
Fig. 2: AKS CNI architecture
In the Azure CNI setup, each pod obtains a unique IP address. This allows for more granular control over network policies, enabling improved network segmentation and enhanced security measures.
Other key components include:
Pods are the basic unit of deployment and management for containerized applications. Each pod contains one or more containers that share networking and storage, along with the CPU and memory allocated to the pod. Pods isolate applications from other workloads running on the same node so that they run independently and securely.
AKS supports several options for communication between pods and other resources within the cluster. By default, pods can communicate with each other within the cluster. Kubernetes’ networking components, such as the kube-proxy service, manage network rules and routes and allow pods to interact with each other and other resources in the cluster.
Network policies enable you to define granular network rules that manage the traffic between pods. This provides micro-segmentation for pods, similar to how Network Security Groups (NSGs) provide micro-segmentation for virtual machines (VMs). The policies are essential for implementing security best practices, such as zero-trust network setups. They also enable advanced network configurations such as network segmentation, load balancing, and service discovery.
Before exploring how to configure network policies in AKS, you’ll create an AKS cluster. Once you set up the AKS cluster, you can start creating and managing network policies to secure and control pod communication in the cluster. To follow along, ensure you have an Azure subscription.
On the Azure portal overview page, click Create a resource.
In the Categories section, click Containers > Azure Kubernetes Service (AKS).
In the Basics tab, specify the following cluster details:
Click Review + create, review the cluster details, and click Create.
To connect to and access your AKS cluster, navigate to the Azure portal and select the resource group containing your AKS cluster. Click the name of your AKS cluster to open its overview page.
On the overview page, click Connect. Then, in the left pane, click Open Cloud Shell.
Select your preferred shell (Bash or PowerShell) and set the environment.
Now, run the command below to retrieve the credentials for your AKS cluster and configure the local kubectl configuration file (~/.kube/config) to connect to the cluster. This allows you to manage the cluster and its resources using the kubectl terminal session.
az aks get-credentials --resource-group <your-resource-group-name> --name <your-aks-cluster-name>
If you manage multiple clusters, run the following command to switch your kubectl context to the AKS cluster:
kubectl config use-context <your-aks-cluster>
Finally, run this command to view the number of active nodes in the cluster:
kubectl get nodes
You should see a list of active nodes in your cluster. The agentpool node is the default node added to the cluster when you create a cluster using the Azure portal. You can add additional nodes when creating the cluster or scale the cluster up or down as needed.
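As a hedged illustration of scaling the cluster from the command line, you could resize the default node pool with az aks scale (the resource group and cluster names below are placeholders for your own values):

```shell
# Scale the cluster's default node pool to three nodes.
# Replace the placeholders with your resource group and cluster names.
az aks scale \
  --resource-group <your-resource-group-name> \
  --name <your-aks-cluster-name> \
  --node-count 3
```

This command requires an authenticated Azure CLI session and may take a few minutes to complete while Azure provisions the additional node.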
Fig. 3: Nodes in an AKS cluster
You have successfully connected to your cluster. Now you can manage your AKS cluster and its resources.
Before you can implement network policies in your AKS cluster, you must create a namespace. A namespace is a logical division of a Kubernetes cluster that allows you to organize and manage resources. You use this namespace to host and run pods and deploy containerized applications.
Create a namespace using the command below:
kubectl create namespace <name-space>
To test network policies in the AKS cluster, deploy a simple application using the tutum/hello-world image. This application helps verify that network policies are working as expected.
Run the following commands to deploy two pods in your namespace:
kubectl run <your pod1-name> --image=tutum/hello-world --namespace=<name-space> --labels=app=hello-world,role=backend --expose --port=80
kubectl run <your pod2-name> --image=tutum/hello-world --namespace=<name-space> --labels=app=hello-world,role=backend --expose --port=80
Run the command below to check the pods that are running:
kubectl get pods -n <name-space>
You should see two pods running:
Fig. 4: Number of active pods in namespace
The next section explores how to create and apply network policies to manage network traffic.
By default, AKS clusters allow pods to communicate with each other without any restrictions. This means that any pod can send traffic to any other pod, regardless of whether the receiving pod expects that traffic.
However, this open communication can pose security risks and make it challenging to implement secure access controls. For instance, malicious pods or compromised containers could exploit this unrestricted communication to interact with other pods, potentially gaining access to sensitive data or disrupting critical services.
Network policies are a Kubernetes feature available in AKS that enables the precise management and control of traffic flow between pods within clusters. They work by defining the network communication rules based on the required criteria that include attributes such as assigned labels, namespace, and IP addresses. The purpose is to dictate which pods can communicate with each other and to specify the allowed ports or protocols for such communication. Network policies provide an additional layer of security and isolation in the cluster.
Beyond security benefits, they have the potential to optimize cluster performance by minimizing unnecessary network traffic. By restricting communication between pods, network policies contribute to efficient network resource use, support faster operations, and reduce latency.
AKS supports two network policy providers: Azure Network Policy Manager and Calico.
Regardless of the provider, the basic structure of a network policy includes the following sections:
Pod selectors define which pods are affected by a network policy. These selectors consist of key-value pairs assigned to pods as labels.
Here is an example of a pod selector that would target all pods that have the label app: test.
podSelector:
  matchLabels:
    app: test
Ingress and egress rules specify the permitted network traffic flow to and from the selected pods. Ingress rules determine the allowed inbound traffic that can enter the pods, while egress rules define the allowed outbound traffic that can leave the pods.
Typically, ingress and egress rules comprise the following fields: from or to selectors, which identify the allowed traffic sources or destinations using pod selectors, namespace selectors, or IP blocks; and ports, which list the permitted ports and protocols.
Here’s an example of an ingress rule that allows traffic from all pods in the default namespace to reach pods with the label app: test on port 80:
spec:
  podSelector:
    matchLabels:
      app: test
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: default
    ports:
    - port: 80
In addition to the basic examples highlighted above, you can use network policies for a variety of more advanced scenarios, including denying all traffic by default as a zero-trust baseline, isolating namespaces from one another, and restricting egress to specific external IP ranges.
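As a sketch of one such scenario, a default deny-all policy matches every pod in a namespace and blocks all traffic not explicitly allowed by other, more specific policies (the namespace name is a placeholder):

```yaml
# Deny all ingress and egress traffic for every pod in the namespace.
# Traffic is then only permitted where another policy explicitly allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: <name-space>
spec:
  podSelector: {}   # An empty selector matches all pods in the namespace
  policyTypes:
  - Ingress
  - Egress
```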
When setting up an AKS cluster using the Azure command line interface (CLI), you must define the network plugin and the specific network policy provider. By default, the Azure portal configures Kubenet as the network plugin and Calico as the network policy for the cluster.
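For reference, a cluster using Azure CNI with Calico could be created from the CLI along these lines; the resource group and cluster names are placeholders, and the flags are the standard az aks create options for selecting the network plugin and policy provider:

```shell
# Create an AKS cluster with the Azure CNI network plugin and the
# Calico network policy provider. Replace the placeholder values.
az aks create \
  --resource-group <your-resource-group-name> \
  --name <your-aks-cluster-name> \
  --node-count 2 \
  --network-plugin azure \
  --network-policy calico \
  --generate-ssh-keys
```

Note that the network policy provider is chosen when the cluster is created, so plan this before provisioning.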
Regardless of the chosen plugin and policy, the implementation is the same. You must define a set of rules that govern pod communication in your cluster. This involves creating a manifest YAML file that includes the pod selector, egress, and ingress rules.
A basic example is implementing a policy to restrict communication between the two pods you created initially in your namespace.
To do so, create a new manifest file named network-policy.yaml in the Azure Cloud Shell terminal:
touch network-policy.yaml
Open the file in the Nano text editor:
nano network-policy.yaml
Then, include the following content:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-all-traffic
  namespace: <name-space>
spec:
  podSelector:
    matchLabels:
      app: hello-world
      role: backend
  ingress: []
Don’t forget to change the namespace attribute to your corresponding cluster namespace.
In this example, the policy restricts communication between the pods labeled with app: hello-world and role: backend. You can specify different labels as needed.
With this policy in place, a connection attempt from, for instance, pod1 to pod2 should fail, because the policy disallows all inbound traffic to these pods.
Save your changes and exit the Nano text editor by pressing Ctrl + X, then Y to confirm, and Enter.
To apply the network policy to your AKS cluster, run the command below:
kubectl apply -f network-policy.yaml
To verify that the network policy has been applied to the cluster, first, check the assigned IP addresses of the two pods:
kubectl get pods -n <name-space> -o wide
To test the network policy and verify that it is effectively restricting traffic between the pods, run the following command:
kubectl exec -it <pod1-name> --namespace=<name-space> -- wget -qO- http://<pod2-IP>
You should now see a timeout response logged in the terminal.
Fig. 5: Basic network policy example connection timeout response
This indicates that the network policy is working correctly and regulating communication between the two pods.
As your AKS cluster grows in complexity and you deploy more applications and services in different pods, you may encounter scenarios where a basic network policy is no longer sufficient. In this case, you’ll need to restructure the policy to ensure it addresses the networking requirements of your cluster. A common example is enabling communication between specific pods and allowing external access to selected pods.
Microservices architecture has transformed the way applications are built and deployed. Each microservice, such as a core function or a dedicated application programming interface (API), operates independently in pods, communicating with other components and resources through a shared network infrastructure.
Managing the communication and traffic flow between the pods in the cluster is essential to ensure the entire system operates smoothly and securely. This may include detailing which pods communicate in the network. The rules can specify pod selectors to match specific pod names, labels, ports, or allowed IP addresses.
You can, for example, implement a policy that allows certain pods to communicate while restricting communication with others.
First, run the command below to create the third pod in your namespace, labeled as follows:
kubectl run <your pod3-name> --image=tutum/hello-world --namespace=<your-name-space> --labels app=hello-world,role=frontend
Update the YAML file with the following contents:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: <your-name-space>
spec:
  podSelector:
    matchLabels:
      app: hello-world
      role: backend
  ingress:
  - from:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          app: hello-world
          role: frontend
In this case, the network policy selects the pods labeled app: hello-world and role: backend and defines an ingress rule that allows traffic only from pods labeled app: hello-world and role: frontend, in any namespace (the empty namespaceSelector). Pods without the frontend labels can't reach the backend pods.
Run the command below to apply this network policy:
kubectl apply -f network-policy.yaml
Then, test the connection between the pods:
kubectl exec -it <pod3-name> -n <name-space> -- wget -qO- http://<pod2-IP>
Once the two pods establish a successful connection, you should be able to view the contents of an index.html file served by the hello-world application running on pod2.
Fig. 6: Advanced network policy example of a successful pod connection response
Now, create a new test pod without labeling it with the allowed labels:
kubectl run <pod4-name> --image=tutum/hello-world --namespace=<your-name-space>
Then, try to connect from the fourth pod to one of the pods with the allowed labels.
kubectl exec -it <new-pod4-name> --namespace=<name-space> -- wget -qO- http://<pod2-IP>
You should see a connection timeout message, as shown below:
Fig. 7: Advanced network policy example unsuccessful pod connection timeout response
Since the fourth pod doesn't have the allowed labels, it isn't allowed to communicate with pod2. This confirms that the network policy is working as intended, regulating the network traffic between the specified pods.
In certain situations, you want to enable external access to a particular pod or service outside the cluster. This is often necessary when dependencies extend beyond the Kubernetes environment.
There are several methods for allowing external access to pods in your clusters, including NodePort services, LoadBalancer services, and ingress controllers.
The load balancer method is a straightforward approach to exposing a pod service to the Internet. To allow external access using the AKS load balancer service, you must create a deployment manifest file and a service manifest file. The deployment manifest specifies the pods and their configuration, while the service manifest exposes the pods to external traffic.
Create a test-deployment.yaml manifest file with the following contents:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: tutum/hello-world
        ports:
        - containerPort: 80
Then, create a test-service.yaml manifest file and include the following contents:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer
Save and apply the two manifests:
kubectl apply -f test-deployment.yaml
kubectl apply -f test-service.yaml
This creates a deployment running the tutum/hello-world image and exposes its pod through a LoadBalancer service.
To check the service status and retrieve the external IP, run:
kubectl get services <service-name>
The service obtains an external IP once it’s provisioned.
Fig. 8: Deployed services in namespace
Now, you can access hello-world using the external IP in your browser.
Remember to clean up the resources you created in this tutorial to avoid incurring unnecessary Azure charges. To do so, run the command below to delete your resource group, including its resources.
az group delete --name <your-resource-group-name>
Alternatively, follow these steps to quickly delete resources directly from the Azure portal:
If you want to keep specific resources while removing others, you can delete individual resources within the resource group instead of deleting the entire group. Simply click the All resources service and select the specific resources you want to delete.
One common challenge when working with network policies is ensuring that they’re applied correctly and aren’t causing unintended consequences.
To troubleshoot network policy issues, follow these common steps:
Take a look at this example:
Fig. 9: Applied network policies
You can also use the kubectl logs <podname> -n <namespace> command to check the logs of your pods for any errors or warnings related to network connectivity.
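For example, to list the policies in effect in your namespace and inspect the full rule set of a specific one (restrict-all-traffic is the policy created earlier in this tutorial; the namespace is a placeholder), you can run:

```shell
# List the network policies applied in a namespace
kubectl get networkpolicy -n <name-space>

# Show the pod selector and ingress/egress rules of one policy
kubectl describe networkpolicy restrict-all-traffic -n <name-space>
```

The describe output is useful for confirming that a policy's selectors actually match the pods you intended.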
Fig. 10: Logs for a specific pod
It's important to thoroughly test and validate your network policies in different scenarios to ensure that they're working as expected and provide the desired level of security and performance for your AKS cluster.
This article discussed the fundamentals of network policies in AKS, including how to create and apply them to control the traffic flow and enforce security measures in AKS clusters. To further enhance your AKS network management capabilities, explore Site24x7’s Azure Kubernetes Service Monitoring Integration.
With this integration, you can proactively monitor your clusters’ infrastructure health, configure thresholds for various metrics, monitor resource use, and set up critical issue alerts for early detection and remediation. To start, sign up for a 30-day free trial account!