In this tutorial, we’ll create an Azure AKS cluster and deploy Traefik, Cert-Manager, Argo CD, and the Kubernetes Guestbook example app. Argo CD is a GitOps-based deployment tool for Kubernetes, allowing you to manage application state through Git repositories. It continuously monitors the cluster and automatically applies updates when changes are pushed to Git, improving consistency, automation, and reliability in your CI/CD workflow.
First, we need a working Kubernetes setup.
Prerequisites:
We will need an Azure account for this tutorial. If you don't have one, you can use the 30-Day Free Azure Account. After registering, set up the Azure CLI on your local machine and run az login to log in to your Azure account. Then set up kubectl by running az aks install-cli. We will also need Helm to be installed.
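If you want to double-check the tooling before continuing, a quick sanity check looks like this (exact versions will differ):

```bash
az version              # Azure CLI
kubectl version --client # kubectl, installed via az aks install-cli
helm version            # Helm 3
```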
Later, we’ll also need a domain that we can point to our Kubernetes ingress. I’ll use my own, but you can easily use DuckDNS instead.
First we define some variables that we’ll use throughout the setup:
```bash
# Resource group name
RESOURCE_GROUP="rg-aks-argocd-demo"
# Azure region we want to use
LOCATION="westeurope"
# Azure AKS cluster name
AKS_NAME="aks-argocd-demo"
# Node size and count, we'll keep it as small as possible
# standard_a2_v2: 2 Cores, 4GB RAM
NODE_SIZE="standard_a2_v2"
NODE_COUNT=1
```

If this is a new subscription, we need to register the following resource providers for our Kubernetes cluster to work:
```bash
az provider register --namespace Microsoft.ContainerService
az provider register --namespace Microsoft.Insights
az provider register --namespace Microsoft.OperationalInsights
az provider register --namespace Microsoft.Network
```

This may take a few minutes. Check the registration status with:
```bash
az provider list -o json | jq '.[] | select((.namespace=="Microsoft.ContainerService") or (.namespace=="microsoft.insights") or (.namespace=="Microsoft.OperationalInsights") or (.namespace=="Microsoft.Network")) | "\(.namespace): \(.registrationState)"' -r
```

Now we can create a resource group:
```bash
az group create \
  --name $RESOURCE_GROUP \
  --location $LOCATION
```

And create our AKS cluster:
```bash
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $AKS_NAME \
  --node-vm-size $NODE_SIZE \
  --node-count $NODE_COUNT \
  --enable-managed-identity \
  --enable-aad \
  --enable-azure-rbac \
  --network-plugin azure \
  --network-policy azure \
  --enable-addons monitoring \
  --generate-ssh-keys
```

Wait for it to finish provisioning.
Congratulations! We’ve created a Kubernetes cluster 🎉
Next, we need to add the context to kubectl:
```bash
az aks get-credentials \
  --resource-group $RESOURCE_GROUP \
  --name $AKS_NAME
```

Since we've enabled Azure AD and Azure RBAC, we also need to add our user to the Azure Kubernetes Service RBAC Cluster Admin role:
```bash
USER_ID=$(az ad signed-in-user show --query id -o tsv)
az role assignment create \
  --assignee $USER_ID \
  --role "Azure Kubernetes Service RBAC Cluster Admin" \
  --scope $(az aks show --resource-group $RESOURCE_GROUP --name $AKS_NAME --query id -o tsv)
```

Wait a minute for the role assignment to propagate.
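To confirm the assignment landed, you can list the role assignments on the cluster scope:

```bash
# The new "Azure Kubernetes Service RBAC Cluster Admin" entry should appear here
az role assignment list \
  --assignee $USER_ID \
  --scope $(az aks show --resource-group $RESOURCE_GROUP --name $AKS_NAME --query id -o tsv) \
  --output table
```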
Now verify that we’re connected:
```bash
kubectl get nodes
kubectl get pods -A
```

The cluster is now ready for deployments.
Before we can deploy Argo CD, we need to set up an ingress to reach it securely. We'll use Traefik for this, combined with cert-manager for automated TLS certificate management.
Let’s start with cert-manager.
Create a namespace and deploy cert-manager:
```bash
kubectl create namespace cert-manager
helm install \
  cert-manager oci://quay.io/jetstack/charts/cert-manager \
  --version v1.19.1 \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true
```
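Before moving on, it's worth confirming the cert-manager pods came up:

```bash
kubectl get pods -n cert-manager
# Expect the cert-manager, cainjector, and webhook pods in Running state
```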
Next, we need to configure it. Create a new folder called bootstrap, and inside it another new folder called ingress:

```
bootstrap
└── ingress
```

Now create a new file inside these folders, bootstrap/ingress/cluster-issuer-staging.yaml (replace the email address with your own):
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: [email protected] # replace!
    privateKeySecretRef:
      name: letsencrypt-staging-key
    solvers:
      - http01:
          ingress:
            class: traefik
```

Apply the configuration:
```bash
kubectl apply -f bootstrap/ingress/cluster-issuer-staging.yaml
```
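It can help to check that the issuer registered with Let's Encrypt before relying on it:

```bash
kubectl get clusterissuer letsencrypt-staging
# The READY column should turn True once ACME registration succeeds
```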
Now, let's deploy our Traefik ingress. Add the Helm repo, create a namespace, and deploy Traefik:
```bash
helm repo add traefik https://traefik.github.io/charts
helm repo update
kubectl create namespace traefik
helm install traefik traefik/traefik \
  --namespace traefik \
  --set ingressClass.enabled=true \
  --set ingressClass.isDefaultClass=true \
  --set service.type=LoadBalancer
```

After a successful deployment, we'll need our public IP address:
```bash
kubectl get svc -n traefik traefik
```

Create a new DNS A record with the value of EXTERNAL-IP. Mine, for example, is argocd.demo.k8s.stack-dev.de.
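If you prefer to script it, the IP can be captured into a variable (a sketch using kubectl's jsonpath output):

```bash
# Grab the LoadBalancer IP for the DNS record
EXTERNAL_IP=$(kubectl get svc -n traefik traefik \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$EXTERNAL_IP"
```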
Test it with:
```bash
nslookup argocd.demo.k8s.stack-dev.de
```

Now we can deploy Argo CD.
We'll turn off the automatic HTTPS redirect in Argo CD itself, since Traefik acts as a reverse proxy in front of it and already terminates TLS. To do this, we need to create a few files. First, create a new folder called argocd inside bootstrap, and inside argocd another folder called argocd-kustomize.
```
bootstrap
└── argocd
    └── argocd-kustomize
```

We create two new files. First, bootstrap/argocd/argocd-kustomize/kustomization.yaml:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
  - https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
patches:
  - path: argocd-cmd-params-cm-patch.yaml
    target:
      kind: ConfigMap
      name: argocd-cmd-params-cm
```

and a file called bootstrap/argocd/argocd-kustomize/argocd-cmd-params-cm-patch.yaml:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
data:
  server.insecure: "true"
```

What we are doing here is using Kustomize to patch the official install manifest so that the Argo CD server serves plain HTTP by default.
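If you want to preview the effect of the patch before applying it, you can render the manifests locally:

```bash
# Render the kustomization and confirm the patched ConfigMap value
kubectl kustomize bootstrap/argocd/argocd-kustomize/ | grep -B2 'server.insecure'
```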
Create a namespace and deploy it:
```bash
kubectl create namespace argocd
kubectl apply -k bootstrap/argocd/argocd-kustomize/
```

We can watch the deployment with:
```bash
kubectl get pods -n argocd -w
```

To reach it, we need to create an ingress with a staging certificate configuration for Argo CD, to test our setup and avoid Let's Encrypt rate limits. Create a new file called bootstrap/argocd/argocd-ingress.yaml:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-staging"
    traefik.ingress.kubernetes.io/router.middlewares: traefik-redirect-to-https@kubernetescrd
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - argocd.demo.k8s.stack-dev.de # replace!
      secretName: argocd-server-tls
  rules:
    - host: argocd.demo.k8s.stack-dev.de # replace!
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
```

Now we want to add an HTTP-to-HTTPS redirect on our ingress. Create a file called bootstrap/ingress/traefik-redirect-middleware.yaml:
```yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: redirect-to-https
  namespace: traefik
spec:
  redirectScheme:
    scheme: https
    permanent: true
```

Apply the staging config:
```bash
kubectl apply -f bootstrap/ingress/traefik-redirect-middleware.yaml
kubectl apply -f bootstrap/argocd/argocd-ingress.yaml
```

Check the certificate status:
```bash
kubectl get certificate -n argocd -w
```

Once all pods show as Running, open the domain we configured earlier, e.g. https://argocd.demo.k8s.stack-dev.de
Note: You’ll see a certificate warning because we’re still using a staging certificate.
Get the Argo CD admin password with:
```bash
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 --decode && echo
```

We can now log in and change the password.
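If you have the argocd CLI installed (it's a separate download), the same can be done from the terminal:

```bash
# Log in with the initial admin credentials, then set a new password
argocd login argocd.demo.k8s.stack-dev.de --username admin
argocd account update-password
```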
Finally, we’ll replace the staging TLS certificate with a production one.
Create a file called bootstrap/ingress/cluster-issuer-prod.yaml:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected] # replace!
    privateKeySecretRef:
      name: letsencrypt-prod-key
    solvers:
      - http01:
          ingress:
            class: traefik
```

Apply the production configuration:
```bash
kubectl apply -f bootstrap/ingress/cluster-issuer-prod.yaml
```

Then edit bootstrap/argocd/argocd-ingress.yaml and change cert-manager.io/cluster-issuer from "letsencrypt-staging" to "letsencrypt":
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt"
    traefik.ingress.kubernetes.io/router.middlewares: traefik-redirect-to-https@kubernetescrd
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - argocd.demo.k8s.stack-dev.de # replace!
      secretName: argocd-server-tls
  rules:
    - host: argocd.demo.k8s.stack-dev.de # replace!
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: argocd-server
                port:
                  number: 80
```

Apply the updated configuration and delete the old certs:
```bash
# Delete the staging certificate and secret
kubectl delete certificate argocd-server-tls -n argocd
kubectl delete secret argocd-server-tls -n argocd
kubectl apply -f bootstrap/argocd/argocd-ingress.yaml
```

Check the certificate status:
```bash
kubectl get certificate -n argocd -w
```

Then visit your domain again; it should now have a valid TLS certificate (you might need to clear the browser cache).
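You can also verify the issuer from the command line; it should now be Let's Encrypt rather than the staging CA:

```bash
# Print the certificate issuer of the served TLS certificate
echo | openssl s_client -connect argocd.demo.k8s.stack-dev.de:443 \
  -servername argocd.demo.k8s.stack-dev.de 2>/dev/null | openssl x509 -noout -issuer
```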
Now we can use Argo CD to deploy our first application. For testing, we'll use the Kubernetes Guestbook Example.
First, create a new DNS A record for our guestbook: guestbook.demo.k8s.stack-dev.de
It should point to the same EXTERNAL-IP.
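As before, confirm the record resolves:

```bash
nslookup guestbook.demo.k8s.stack-dev.de
```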
After creating the DNS entry, we need a Git repository. Create one on e.g. GitHub and check in all files of your tutorial working directory.
Our guestbook application consists of a simple PHP frontend that stores data in Redis. We've adapted the official Kubernetes guestbook example to work with our AKS cluster.
The application structure looks like this:
```
app
└── guestbook
    ├── dev
    │   ├── guestbook-ingress.yaml
    │   ├── guestbook-ui-deployment.yaml
    │   ├── guestbook-ui-svc.yaml
    │   ├── kustomization.yaml
    │   ├── redis-deployment.yaml
    │   └── redis-svc.yaml
    └── application.yaml
```

Let's create all files:
- Argo CD Application definition: app/guestbook/application.yaml
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook-argo-application
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/mietzen/AKS-ArgoCD-tutorial # Replace with your own repo!
    targetRevision: HEAD
    path: app/guestbook/dev
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      selfHeal: true
      prune: true
```

- Kustomize configuration: app/guestbook/dev/kustomization.yaml
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - guestbook-ui-deployment.yaml
  - guestbook-ui-svc.yaml
  - guestbook-ingress.yaml
  - redis-deployment.yaml
  - redis-svc.yaml
```

- Ingress definition: app/guestbook/dev/guestbook-ingress.yaml
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: guestbook-ui-ingress
  namespace: guestbook
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt"
    traefik.ingress.kubernetes.io/router.middlewares: traefik-redirect-to-https@kubernetescrd
spec:
  ingressClassName: traefik
  tls:
    - hosts:
        - guestbook.demo.k8s.stack-dev.de # replace with your domain!
      secretName: guestbook-ui-tls
  rules:
    - host: guestbook.demo.k8s.stack-dev.de # replace with your domain!
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: guestbook-ui
                port:
                  number: 80
```

- Redis instance: app/guestbook/dev/redis-deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
          ports:
            - containerPort: 6379
```

- Redis services (both leader and follower pointing to the same pod): app/guestbook/dev/redis-svc.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: redis-leader
  labels:
    app: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
---
apiVersion: v1
kind: Service
metadata:
  name: redis-follower
  labels:
    app: redis
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: redis
```

Note: We create both redis-leader and redis-follower services pointing to the same Redis pod because the guestbook frontend expects both service names (writes go to the leader, reads go to the follower). This simplified setup is due to our development environment with limited resources.
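The tree above also lists app/guestbook/dev/guestbook-ui-deployment.yaml and app/guestbook/dev/guestbook-ui-svc.yaml, which aren't shown inline here. A minimal sketch based on the upstream Kubernetes guestbook frontend follows; the gb-frontend image tag and the resource requests are assumptions, so adjust them to whatever the upstream example currently ships:

```yaml
# app/guestbook/dev/guestbook-ui-deployment.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-ui
  labels:
    app: guestbook-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guestbook-ui
  template:
    metadata:
      labels:
        app: guestbook-ui
    spec:
      containers:
        - name: guestbook-ui
          image: gcr.io/google_samples/gb-frontend:v5 # assumed upstream image
          env:
            - name: GET_HOSTS_FROM
              value: dns # resolve redis-leader/redis-follower via cluster DNS
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
          ports:
            - containerPort: 80
---
# app/guestbook/dev/guestbook-ui-svc.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui
  labels:
    app: guestbook-ui
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: guestbook-ui
```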
After creating all these files, add, commit, and push them to your GitHub repo.
To deploy our application definition to Argo CD we need to use kubectl apply one last time:
```bash
kubectl apply -f app/guestbook/application.yaml
```

We can now go to our Argo CD instance, e.g. https://argocd.demo.k8s.stack-dev.de, and check out the deployment:
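The same status is also visible without the UI, via the Application custom resource:

```bash
# Shows the sync and health status reported by Argo CD
kubectl get application guestbook-argo-application -n argocd
```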
You should see:
- One guestbook-ui deployment (with a ReplicaSet and a pod)
- One redis deployment (with a ReplicaSet and a pod)
- Services: guestbook-ui, redis-leader, redis-follower
- Ingress: guestbook-ui-ingress
- Certificate: guestbook-ui-tls
Once all pods are running and the certificate is ready, visit your guestbook e.g.: https://guestbook.demo.k8s.stack-dev.de
You should see the guestbook interface where you can:
- Submit messages
- View all submitted messages
- See messages persist in Redis
Thanks to Argo CD's GitOps approach, any changes pushed to the Git repository will automatically be synced to the cluster (due to selfHeal: true and prune: true in the sync policy).
To make changes:
- Edit the YAML files in app/guestbook/dev/
- Commit and push to your repository
- Argo CD will detect the changes and automatically sync
- Watch the sync in the Argo CD UI
In a real GitOps setup, changes would normally go through a pull request, review, and automated validation process before being merged. Updates are often deployed to staging first, then promoted to production using the same Git workflow. While this tutorial keeps things simple, these practices enable safer, traceable, and fully automated deployments.
We can test this by, for example, changing the Redis image from redis:7-alpine to redis:8-alpine. After we push this change, we can watch Argo CD pick it up and deploy a new Redis instance.
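A sketch of that workflow, assuming GNU sed and a local checkout of your repo:

```bash
# Bump the Redis image tag (on macOS, use: sed -i '' ...)
sed -i 's/redis:7-alpine/redis:8-alpine/' app/guestbook/dev/redis-deployment.yaml
git add app/guestbook/dev/redis-deployment.yaml
git commit -m "Bump Redis to 8-alpine"
git push
# Watch Argo CD roll out the new pod
kubectl get pods -n guestbook -w
```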
Congratulations! You've successfully deployed a GitOps-managed application on AKS using Argo CD 🎉
If you are finished testing, you can delete the cluster and the local configuration by executing the following commands.
Remove user, cluster and context from kubectl config:
```bash
KUBECTL_USER=$(kubectl config get-users | grep "${RESOURCE_GROUP}_${AKS_NAME}")
kubectl config delete-cluster "$AKS_NAME"
kubectl config unset users."$KUBECTL_USER"
kubectl config delete-context "$AKS_NAME"
```

To remove the whole resource group including the cluster:
```bash
az group delete --name $RESOURCE_GROUP --yes
```

