Implementing Pod-to-Pod encryption with Istio Ambient Mesh

Date: 2025-04-20

DISCLAIMER: The contents of this article must NOT be considered official CKS training material by any means and are provided solely for reference. CKS candidates are strongly encouraged to consult official CNCF training content such as LFS260: Kubernetes Security Essentials and study independently for the exam.

Encrypting data at rest and in transit should be part of any robust IT security strategy. Furthermore, a zero-trust approach to workload security mandates encryption in transit not only for north-south traffic but also for east-west traffic. In Kubernetes, the latter corresponds to Pod-to-Pod communication.

This article and the accompanying hands-on lab demonstrate Pod-to-Pod encryption with Istio Ambient Mesh, also known as ambient mode.

Lab: Securing workload communication with mTLS using Istio Ambient Mesh

This lab has been tested with Kubernetes v1.32 (Penelope) and Cilium 1.17.3. It aligns with the following domain in the updated CKS curriculum effective April 2025:

Minimize Microservice Vulnerabilities (20%)

Prerequisites

Familiarity with Kubernetes is assumed. See LFS158x: Introduction to Kubernetes for a gentle introduction to Kubernetes.

Setting up your environment

You’ll need a Linux environment with at least 4 vCPUs, 8 GiB of memory and sufficient free disk space, capable of running Docker. This can be your own desktop/laptop if you’re a Linux user (like I am ;-), or a spare board (e.g. Raspberry Pi), physical server, virtual machine or cloud instance.

The reference environment is Ubuntu 24.04 LTS (Noble Numbat), so if you’re on a different Linux distribution, adapt the apt-related commands to dnf / pacman / your distribution’s package manager when installing system packages. Otherwise, the remaining instructions should be broadly applicable to most Linux distributions.

We’ll install the following tools: Docker, kind, kubectl, Cilium CLI and Helm.

Install Docker

sudo apt update && sudo apt install -y docker.io
sudo usermod -aG docker $USER

Log out and in again for group membership to take effect.
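
Alternatively, if you’d rather not log out, you can start a subshell with the new group membership applied (newgrp is a standard Linux utility, not specific to Docker):

newgrp docker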

Check the Docker version:

docker --version

Sample output:

Docker version 26.1.3, build 26.1.3-0ubuntu1~24.04.1

Install kind

sudo wget -qO /usr/local/bin/kind https://kind.sigs.k8s.io/dl/v0.27.0/kind-linux-amd64
sudo chmod +x /usr/local/bin/kind

Check the kind version:

kind version

Sample output:

kind v0.27.0 go1.23.6 linux/amd64

Install kubectl

sudo wget -qO /usr/local/bin/kubectl https://dl.k8s.io/release/v1.32.3/bin/linux/amd64/kubectl
sudo chmod +x /usr/local/bin/kubectl

Check the kubectl version:

kubectl version --client

Sample output:

Client Version: v1.32.3
Kustomize Version: v5.5.0

Enable kubectl shell completion:

echo "source <(kubectl completion bash)" >> $HOME/.bashrc
source $HOME/.bashrc
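
Optionally, set up the short k alias many candidates use in exam environments, wired to the same completion function:

echo "alias k=kubectl" >> $HOME/.bashrc
echo "complete -o default -F __start_kubectl k" >> $HOME/.bashrc
source $HOME/.bashrc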

Install Cilium CLI

We’ll use Cilium CLI to install Cilium as our CNI. Cilium provides advanced networking performance, security and observability features through the revolutionary eBPF kernel technology.

wget https://github.com/cilium/cilium-cli/releases/download/v0.18.3/cilium-linux-amd64.tar.gz
tar xvf cilium-linux-amd64.tar.gz
chmod +x ./cilium
sudo mv ./cilium /usr/local/bin/.

Check the Cilium CLI version:

cilium version --client

Sample output:

cilium-cli: v0.18.3 compiled with go1.24.2 on linux/amd64
cilium image (default): v1.17.2
cilium image (stable): v1.17.3

Enable shell completion for Cilium CLI:

echo "source <(cilium completion bash)" >> $HOME/.bashrc
source $HOME/.bashrc

Install Helm

wget https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
tar xvf helm-v3.17.3-linux-amd64.tar.gz
chmod +x linux-amd64/helm
sudo mv linux-amd64/helm /usr/local/bin/.

Check the Helm version:

helm version

Sample output:

version.BuildInfo{Version:"v3.17.3", GitCommit:"e4da49785aa6e6ee2b86efd5dd9e43400318262b", GitTreeState:"clean", GoVersion:"go1.23.7"}

Enable shell completion for Helm:

echo "source <(helm completion bash)" >> $HOME/.bashrc
source $HOME/.bashrc

Spin up a cluster with kind

We’ll use the following kind configuration file to disable both the default CNI and kube-proxy; in a later step, we’ll install Cilium as our CNI with its kube-proxy replacement enabled.

---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: istio-ambient-mtls
networking:
  disableDefaultCNI: true
  kubeProxyMode: "none"

Save the file above as kind-istio-ambient-mtls.yaml and create our cluster with the following command:

kind create cluster --config kind-istio-ambient-mtls.yaml
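
At this point the cluster has no CNI, so the node is expected to report NotReady until Cilium is installed in the next step. You can confirm with:

kubectl get nodes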

Install Cilium with cni.exclusive=false

We’ll install Cilium 1.17.3 with cni.exclusive set to false. This ensures that Cilium does not take complete ownership of the CNI-related directories and is a requirement for running Istio in ambient mode on top of Cilium.

cilium install \
    --version 1.17.3 \
    --set cni.exclusive=false

Wait for Cilium to become ready:

cilium status --wait
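
With Cilium installed and healthy, the node should now report Ready:

kubectl get nodes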

Install Istio in ambient mode with Helm

Istio Ambient Mesh consists of the following Helm charts:

  1. istio/base: shared Istio resources including CRDs
  2. istio/istiod: the Istio control plane (Pilot)
  3. istio/cni: the Istio CNI plugin, required for ambient mode
  4. istio/ztunnel: Istio’s zero-trust tunnel (ztunnel) data plane for ambient mode

All Istio Helm charts support specifying the installation profile via the profile Helm option. For example, pass --set profile=ambient to each helm install command to install Istio with the ambient profile enabled.

Add the official Istio Helm repository and update repository metadata.

helm repo add istio https://istio-release.storage.googleapis.com/charts
helm repo update
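
Optionally, confirm the charts and the pinned chart version are available from the repository before installing:

helm search repo istio --version 1.25.2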

Create the istio-system namespace if it doesn’t already exist.

kubectl create ns istio-system

Now install each chart in order, passing the --set profile=ambient option so that each Istio component is installed with the ambient profile enabled.

helm -n istio-system install \
    --set profile=ambient \
    base \
    istio/base \
    --version 1.25.2 \
    --wait
helm -n istio-system install \
    --set profile=ambient \
    istiod \
    istio/istiod \
    --version 1.25.2 \
    --wait
helm -n istio-system install \
    --set profile=ambient \
    cni \
    istio/cni \
    --version 1.25.2 \
    --wait
helm -n istio-system install \
    --set profile=ambient \
    ztunnel \
    istio/ztunnel \
    --version 1.25.2 \
    --wait
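
Before moving on, verify that all four releases are deployed and that the istiod, Istio CNI node agent and ztunnel Pods are up:

helm -n istio-system ls
kubectl -n istio-system get pods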

Create a meshed namespace and enforce mTLS

Let’s create a namespace istio-ambient-mtls for our meshed application.

kubectl create ns istio-ambient-mtls

Label our namespace with istio.io/dataplane-mode=ambient to mesh all workloads in the namespace.

kubectl label ns istio-ambient-mtls \
    istio.io/dataplane-mode=ambient
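
Verify the label was applied:

kubectl get ns istio-ambient-mtls --show-labels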

To enforce Pod-to-Pod encryption with mTLS for our meshed workloads in strict mode, we need to define a PeerAuthentication custom resource.

---
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: peer-auth-strict-mtls
  namespace: istio-ambient-mtls
spec:
  mtls:
    mode: STRICT

Save the file above as peer-auth-strict-mtls.yaml and apply the manifest.

kubectl apply -f peer-auth-strict-mtls.yaml
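
Confirm the policy exists and is in STRICT mode:

kubectl -n istio-ambient-mtls get peerauthentication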

Verify our mTLS configuration is working

Create a simple NGINX deployment and expose it as a ClusterIP Service:

kubectl -n istio-ambient-mtls create deploy nginx \
    --image=nginx \
    --replicas=2 \
    --port=80
kubectl -n istio-ambient-mtls expose deploy nginx
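
Check that the Service was created and is backed by the two NGINX Pods:

kubectl -n istio-ambient-mtls get svc,endpoints nginx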

Now create a simple curlpod which cURLs our NGINX service every second:

kubectl -n istio-ambient-mtls run curlpod \
    --image=curlimages/curl \
    --command=true \
    -- \
    /bin/sh -c "while true; do curl -is --fail --max-time 1 nginx; sleep 1; done"

Confirm that our curlpod can reach our NGINX service:

kubectl -n istio-ambient-mtls logs pod/curlpod | \
    grep "200 OK" | \
    tail -1

Sample output:

HTTP/1.1 200 OK

Now validate that mTLS is working by inspecting the ztunnel logs:

kubectl -n istio-system logs ds/ztunnel | \
    grep "connection complete" | \
    grep "istio-ambient-mtls" | \
    tail -1

Sample output:

2025-04-20T03:08:37.820987Z info    access  connection complete src.addr=10.244.0.237:39816 src.workload="curlpod" src.namespace="istio-ambient-mtls" src.identity="spiffe://cluster.local/ns/istio-ambient-mtls/sa/default" dst.addr=10.244.0.230:15008 dst.hbone_addr=10.244.0.230:80 dst.workload="nginx-86c57bc6b8-npqfq" dst.namespace="istio-ambient-mtls" dst.identity="spiffe://cluster.local/ns/istio-ambient-mtls/sa/default" direction="outbound" bytes_sent=69 bytes_recv=853 duration="3ms"
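
As an optional negative test, confirm that STRICT mode rejects plaintext traffic from outside the mesh. The following one-off Pod (the name curltest is arbitrary) runs in the unmeshed default namespace, so it cannot present an Istio mTLS identity and its request should fail instead of returning 200 OK:

kubectl run curltest --rm -it --restart=Never \
    --image=curlimages/curl -- \
    curl -is --max-time 5 nginx.istio-ambient-mtls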

Congratulations, you successfully enabled Pod-to-Pod encryption with Istio Ambient Mesh!

Demo

An asciicast recording for this hands-on lab is available on Asciinema.

Demo: Implementing Pod-to-Pod encryption with Istio Ambient Mesh

Concluding remarks and going further

We saw how Istio Ambient Mesh provides in-transit encryption of east-west traffic via mTLS which is an integral part of any robust IT security strategy adopting zero-trust principles for their workloads. If you would like to dive deeper into this topic or Kubernetes security in general, consider the following open-ended exercises:

  1. Which Istio components (if any) can be run with the restricted PSS profile enabled, and if so, which Helm value(s) should be customized? By installing only those Istio components requiring elevated host access in the kube-system namespace, can you deploy the remaining components in the istio-system namespace and enforce the restricted PSS profile namespace-wide?
  2. Try setting the Cilium policy enforcement mode to always. What breaks (note that some components in kube-system may break as well)? Apply suitable Cilium network policies allowing just the minimum amount of network access to get our example working again, and verify that our curlpod can still reach the NGINX service with mTLS enabled.
  3. Do we really need a service mesh just for transparent Pod-to-Pod encryption? Learn about the various ways to enable Pod-to-Pod encryption with Cilium in the Isovalent lab Cilium Transparent Encryption with IPSec and WireGuard.

I hope you enjoyed this article and stay tuned for updates ;-)
