GitOps for Kubernetes with Nixidy and ArgoCD


After getting into the Nix ecosystem, my whole experience with managing things changed. I found myself constantly asking, Can I use Nix for this? Can I declare that? Everything became centred around declarative configurations. Nix is like a black hole that pulls you in, consuming your energy and time as you joyfully re-imagine everything in the Nix way.

GitHub also often recommends interesting Nix tools to me. That’s how I discovered Nixidy. I’ve been aware of the Rendered Manifests Pattern for a while. ArgoCD’s Source Hydrator supports this pattern, but I initially did not choose ArgoCD for my homelab for three main reasons:

  • I don’t need a UI.
  • FluxCD has the dependsOn feature, which ArgoCD doesn’t.
  • FluxCD comes with built-in SOPS support.

Because of my choice of CD tool, I didn’t explore the Rendered Manifests Pattern much. Now, with Nixidy, I think it is time to revisit this pattern and see how it works in practice. In this post, I will walk you through how to start using Nixidy.

Prerequisite:
You need to have nix installed.
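Since everything below uses flakes, flake support must also be enabled. If it is not already, a typical way (assuming a standard Nix install; adjust the path to your setup) is to add this line to your nix.conf:

```conf
# ~/.config/nix/nix.conf (or /etc/nix/nix.conf)
experimental-features = nix-command flakes
```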

Preparation

To get started, let’s create some files first.

touch flake.nix
mkdir -p env/dev
touch env/dev/default.nix

For the flake.nix, add the following content:

{
  description = "ArgoCD configuration with nixidy.";

  inputs.nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
  inputs.flake-utils.url = "github:numtide/flake-utils";
  inputs.nixidy.url = "github:arnarg/nixidy";

  outputs = {
    self,
    nixpkgs,
    flake-utils,
    nixidy,
  }: (flake-utils.lib.eachDefaultSystem (system: let
    pkgs = import nixpkgs {
      inherit system;
    };
  in {
    nixidyEnvs = nixidy.lib.mkEnvs {
      inherit pkgs;

      envs = {
        dev.modules = [./env/dev];
      };
    };

    packages.nixidy = nixidy.packages.${system}.default;

    devShells.default = pkgs.mkShell {
      buildInputs = [nixidy.packages.${system}.default];
    };
  }));
}

On env/dev/default.nix, update the repository information to point to your own repository.

{
  nixidy.target.repository = "https://github.com/aufomm/nixidy-argocd.git";
  nixidy.target.branch = "main";
  nixidy.target.rootPath = "./manifests/dev";
}

YAML Apps

Next, let’s create an application manually. We’ll create env/dev/httpbin.nix. This file is pretty straightforward: we define an application called httpbin that watches a folder containing the namespace, deployment, and service for httpbin. Writing these resources in Nix is very similar to writing the YAML directly, but with the added power of the Nix language for declaring resources.

{
  applications.httpbin = {
    namespace = "httpbin";
    createNamespace = true;
    resources =
      let
        labels = {
          "app.kubernetes.io/name" = "httpbin";
        };
      in
      {
        deployments.httpbin.spec = {
          selector.matchLabels = labels;
          template = {
            metadata.labels = labels;
            spec = {
              containers.httpbin = {
                image = "mccutchen/go-httpbin:v2.15.0";
              };
            };
          };
        };
        services.httpbin-svc = {
          spec = {
            selector = labels;
            ports.http = {
              port = 80;
              appProtocol = "http";
              targetPort = 8080;
            };
          };
        };
      };
  };
}

Now let’s modify env/dev/default.nix to include this file:

...
  nixidy.target.rootPath = "./manifests/dev";
  imports = [
    ./httpbin.nix
  ];
}

Run nix run .#nixidy -- build .#dev and you should see the following result:

result/
├── apps
│   └── Application-httpbin.yaml
└── httpbin
    ├── Namespace-httpbin.yaml
    ├── Service-httpbin-svc.yaml
    └── Deployment-httpbin.yaml

We can see the httpbin Application is set up to watch the generated YAML files in the httpbin folder, which includes the namespace, deployment, and service.
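For reference, the generated apps/Application-httpbin.yaml is a plain ArgoCD Application pointing at the httpbin folder. A rough sketch of what it contains (exact fields may differ; the values here are inferred from the target repository settings configured earlier):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: httpbin
  namespace: argocd
spec:
  destination:
    namespace: httpbin
    server: https://kubernetes.default.svc
  project: default
  source:
    path: ./manifests/dev/httpbin
    repoURL: https://github.com/aufomm/nixidy-argocd.git
    targetRevision: main
```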

Helm Apps

Now that we know how to create resources, let’s see how to manage applications with Helm charts. We’ll deploy an ingress controller to route requests to our cluster.

Save the following to env/dev/kong.nix:

{ lib, ... }:

{
  applications.kong = {
    namespace = "kong";
    createNamespace = true;
    helm.releases.kong = {
      chart = lib.helm.downloadHelmChart {
        repo = "https://charts.konghq.com";
        chart = "ingress";
        version = "0.20.0";
        chartHash = "sha256-it5oEOcZ8AEV6fGrKlmSUwn00l7yD13mIRraWXnHCdA=";
      };
      values = {
        gateway = {
          image.tag = "3.10";
          image.repository = "kong/kong-gateway";
          admin.http.enabled = true;
          env.router_flavor = "expressions";
        };
      };
    };
  };
}

Chart version

Nix is all about reproducibility, so the chart version and hash must be pinned. If the chart is added to our local Helm repo list, we can use helm search repo to get the version (make sure to run helm repo update first).

For example, the command below shows that the latest version of the kong/ingress chart is 0.20.0, so we can use this version in the file.

➜ helm search repo kong/ingress
NAME          CHART VERSION   APP VERSION   DESCRIPTION
kong/ingress  0.20.0          3.9           Deploy Kong Ingress Controller and Kong Gateway

I normally just go to artifacthub to check the chart version and see what options I have for the chart.

Next we need to determine the chartHash. The easiest way is to leave it blank and then run nixidy build .#dev. Nix will calculate the hash for you, as shown below:

nix run .
warning: found empty hash, assuming 'sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA='
error: hash mismatch in fixed-output derivation '/nix/store/cgd1994dznajad75v7fj1y77xzs3b6jr-helm-chart-https-charts.konghq.com-ingress--0.20.0.drv':
         specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
            got:    sha256-it5oEOcZ8AEV6fGrKlmSUwn00l7yD13mIRraWXnHCdA=

Patch resource

If the Helm chart does not expose an option for something we need in the generated YAML files, we can use resources to patch them. For example, if we want to add an annotation to the generated kong-gateway-proxy service and the Kong Helm chart does not provide this feature out of the box, we can add the following to env/dev/kong.nix:

applications.kong.resources = {
  services.kong-gateway-proxy.metadata.annotations."metallb.universe.tf/loadBalancerIPs" = "192.168.18.150";
};

Sometimes we might be looking for options to customize our application. Here I will show you two of them.

Please check out the official documentation for all available options.

transformer

Helm chart-generated resources often include many labels. Nixidy provides the removeLabels function to remove them. Combined with transformer, you can remove labels from all generated resources of a specific Helm release.

Add the following to env/dev/kong.nix and generate the YAMLs again; we should see these labels removed.

applications.kong.helm.releases.kong.transformer = map (
  lib.kube.removeLabels [
    "app.kubernetes.io/version"
    "helm.sh/chart"
  ]
);

extraOpts

Sometimes we need to pass extra options to Helm chart generation. For example, if we want to use the Kubernetes Gateway API with Kong Ingress Controller, we need to make sure the Gateway API is installed before the Kong Helm release; otherwise, the necessary permissions will not be included in the created ClusterRole.

Since the Rendered Manifests Pattern evaluates everything independently, there is no Kubernetes API to check at evaluation time. In this case we can use extraOpts to pass the --api-versions option to the helm template command.

Add the following to env/dev/kong.nix and generate the YAMLs again; we should see extra rules added to the kong-controller ClusterRole.

applications.kong.helm.releases.kong.extraOpts = [
  "--api-versions"
  "gateway.networking.k8s.io/v1"
];

Install CRDs

When we use Helm to install a chart, CRDs bundled with it are installed the first time you run helm install. However, Helm does NOT manage these CRDs, so there is no support for upgrading or deleting them using Helm.

As a best practice, we should control which CRDs and versions are installed on our clusters. We can also do this with Nixidy.

Let’s say we want to install the Kubernetes Gateway API 1.2.1 standard channel. All we need to do is save the following to env/dev/k8s-gateway-api.nix and then import it in env/dev/default.nix.

let
  k8s-crd = builtins.fetchurl {
    url = "https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.2.1/standard-install.yaml";
    sha256 = "0r2dssfa38hh0yxmwmx07af4w7h9vqjdsas3qnsvjcrvmgv8nncp";
  };
in
{
  applications.k8s-gw-api-crds = {
    yamls = [
      (builtins.readFile k8s-crd)
    ];
  };
}

As usual, we can leave sha256 blank and let Nix generate the hash for us.

Sometimes we might want to apply a YAML file (such as a custom resource) whose type is not available in Nixidy, so we can’t write it the same way we define a service via resources.services.*.

In this case, we can include these YAMLs in the application’s yamls section (applications.<name>.yamls).

For example, let’s say I need to create GatewayClass and Gateway resources for Kong. We can add the following to env/dev/kong.nix:

{
  applications.kong.yamls = [
    (''
      apiVersion: gateway.networking.k8s.io/v1
      kind: GatewayClass
      metadata:
        annotations:
          konghq.com/gatewayclass-unmanaged: "true"
        name: kong
      spec:
        controllerName: konghq.com/kic-gateway-controller
    '')
    (''
      apiVersion: gateway.networking.k8s.io/v1
      kind: Gateway
      metadata:
        name: kong
        namespace: kong
      spec:
        gatewayClassName: kong
        listeners:
        - allowedRoutes:
            namespaces:
              from: All
          name: proxy
          port: 80
          protocol: HTTP
    '')
  ];
}

YAML files added here will be parsed and added to the application’s resources, where they can be overwritten and modified.
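To illustrate this (with a hypothetical ConfigMap, not something from this setup): a resource added through yamls becomes addressable under resources, so it can be patched afterwards.

```nix
{
  applications.kong = {
    yamls = [
      ''
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: demo-config
          namespace: kong
        data:
          mode: "default"
      ''
    ];
    # The parsed ConfigMap can now be overridden like any other resource:
    resources.configMaps.demo-config.data.mode = "expressions";
  };
}
```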

Generate CRD resources

We’ve explored how to create core Kubernetes entities like deployments and services using Nixidy, and how to include YAMLs in the final output. But what if we want to regularly create custom resources in the Nix way?

Gateway API Generator

We can go back to our flake.nix and add the following:

packages.generators.gateway-api = nixidy.packages.${system}.generators.fromCRD {
  name = "gateway-api";
  src = pkgs.fetchFromGitHub {
    owner = "kubernetes-sigs";
    repo = "gateway-api";
    rev = "v1.2.1";
    hash = "sha256-jVW/8RhhZi50xscb/obtMbrDwZRE1BkDqah3rq+Mgvc=";
  };
  crds = [
    "config/crd/standard/gateway.networking.k8s.io_httproutes.yaml"
  ];
};

This snippet creates the generators.gateway-api package, which generates a Nixidy module for the HTTPRoute CRD. The source is fetched from GitHub, and we can also use the empty-hash trick to calculate the hash.

Next, run:

nix build .#generators.gateway-api

This will generate the module that we need as ./result in the same folder.

Let’s copy this file to env/dev/modules:

mkdir -p env/dev/modules
cp result env/dev/modules/k8s-gateway-api.nix

Then import it back by adding the following to env/dev/default.nix.

nixidy.applicationImports = [
  ./modules/k8s-gateway-api.nix
];

HTTPRoute Resource

Now that the resource type is available, let’s create our HTTPRoute object. Add the following to env/dev/httpbin.nix under the resources section:

...
hTTPRoutes.httpbin-bin-route = {
  metadata.annotations."konghq.com/strip-path" = "true";
  spec = {
    parentRefs = [
      {
        group = "gateway.networking.k8s.io";
        kind = "Gateway";
        name = "kong";
        namespace = "kong";
      }
    ];
    rules = [
      {
        matches = [
          {
            path = {
              type = "PathPrefix";
              value = "/demo";
            };
          }
        ];
        backendRefs = [
          {
            name = "httpbin-svc";
            kind = "Service";
            port = 80;
          }
        ];
      }
    ];
  };
};
...

Now, Nixidy will generate the HTTPRoute objects for us.
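Assuming a field-for-field mapping from the Nix attributes above, the generated HTTPRoute YAML in the httpbin folder should look roughly like this:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  annotations:
    konghq.com/strip-path: "true"
  name: httpbin-bin-route
  namespace: httpbin
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: kong
    namespace: kong
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /demo
    backendRefs:
    - kind: Service
      name: httpbin-svc
      port: 80
```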

App of Apps

Now we know how Nixidy generates plain YAML files for applications. We should also notice that Nixidy creates ArgoCD Applications in the apps/ folder. These Applications watch and apply the YAML files in each application’s folder. We should have three Applications in the apps/ folder now.
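With the httpbin, kong, and k8s-gw-api-crds applications defined so far, the apps/ folder in the build output should contain roughly the following (file names assumed from the naming pattern seen earlier):

```
result/apps/
├── Application-httpbin.yaml
├── Application-k8s-gw-api-crds.yaml
└── Application-kong.yaml
```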

Next, run nix run .#nixidy -- bootstrap .#dev to generate the “app of apps” (the master app) that can be used to bootstrap the cluster:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: apps
  namespace: argocd
spec:
  destination:
    namespace: argocd
    server: https://kubernetes.default.svc
  project: default
  source:
    path: ./manifests/dev/apps
    repoURL: https://github.com/aufomm/nixidy-argocd.git
    targetRevision: main
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

We can store this master app as manifests/dev/bootstrap.yaml. When we apply it to the cluster, ArgoCD will sync this Application first, which in turn creates and syncs the rest of the Applications.

ArgoCD

So far, we’ve focused on generating plain YAMLs and Applications for ArgoCD. Let’s see how we can use Nixidy to generate the YAML for ArgoCD itself and let ArgoCD manage itself once it’s up and running.

The following is the content of env/dev/argocd.nix:

{ lib, ... }:

{
  applications.argocd = {
    namespace = "argocd";
    createNamespace = true;
    helm.releases.argocd = {
      chart = lib.helm.downloadHelmChart {
        repo = "https://argoproj.github.io/argo-helm/";
        chart = "argo-cd";
        version = "8.0.14";
        chartHash = "sha256-75XQBHonKkx2u6msOqt8iwddenNbzbujDWRqGh1I66o=";
      };
      values = {
        configs.secret.argocdServerAdminPassword = "$2a$10$XChYH1h/gF1UEdzapUzQ0Ovgmd8m7sN0uulF2.OKiQeyen2YOVogG";
        server.service = {
          type = "LoadBalancer";
          annotations."metallb.universe.tf/loadBalancerIPs" = "192.168.18.159";
        };
      };
    };
    yamls = [
      (builtins.readFile ./argocd-secrets.sops.yaml)
      (''
        apiVersion: cert-manager.io/v1
        kind: Certificate
        metadata:
          name: argocd-server-cert
          namespace: argocd
        spec:
          secretName: argocd-server-tls
          issuerRef:
            name: lab-k8s-ca-issuer
            kind: ClusterIssuer
          dnsNames:
          - argo.li.k8s
          usages:
          - digital signature
          - key encipherment
          - server auth
      '')
    ];
  };
}

Let me break this down for you.

SopsSecret

As mentioned earlier, ArgoCD does not support SOPS out of the box. To securely store sensitive data in the Git repository, we can encrypt it with SOPS and use sops-secrets-operator to decrypt it in the cluster. If you do not know how to use this tool, please check my previous post.

Password

ArgoCD provides a UI, and I like to declaratively set the password used to initialize ArgoCD. According to this GitHub issue, we need to bcrypt-hash the password and set it in configs.secret.argocdServerAdminPassword. To generate this hash, simply run:

htpasswd -nbBC 10 "" admin1234 | tr -d ':\n'

SSL

I have my own CA set up in my homelab and prefer to run everything with TLS enabled. Here I use a cluster-wide CA issuer to manage SSL certificates in the cluster. As you can see, I include a Certificate object among the ArgoCD application’s resources, so cert-manager is responsible for renewing the SSL certificate for ArgoCD.

Private Repository

If the repository is private, ArgoCD needs a way to pull it. I use a deploy key for this purpose.

First, generate your keys:

ssh-keygen -t ed25519 -f ./identity -N '' -C ''

Then, store the private key as a secret and use SopsSecret to encrypt and store the content as shown below:

apiVersion: isindir.github.com/v1alpha3
kind: SopsSecret
metadata:
  name: argocd-secrets-sops
  namespace: argocd
spec:
  suspend: false
  secretTemplates:
  - name: argocd-repo
    type: Opaque
    labels:
      argocd.argoproj.io/secret-type: repository
    stringData:
      sshPrivateKey: |
        -----BEGIN OPENSSH PRIVATE KEY-----
        ...
        -----END OPENSSH PRIVATE KEY-----
      type: git
      url: git@github.com:aufomm/nixidy-argocd.git

This adds the Repository to ArgoCD. We can then upload the generated public key identity.pub to our Git repository as a deploy key to allow ArgoCD to pull the content.

Summary

The Rendered Manifests Pattern is an interesting concept that offers greater clarity on exactly what changes will be applied to the cluster. If you care about being explicit in your deployments, I highly recommend checking out this pattern.

That’s all I wanted to share with you today, see you in the next one!
