This document covers the basics needed to work with the Kyverno codebase.
It contains instructions to build, run, and test Kyverno.
- Open project in devcontainer
- Tools
- Building local binaries
- Building local images
- Pushing images
- Deploying a local build
- Code generation
- Debugging local code
- Profiling
- API Design
- Controllers Design
- Logging
- Feature Flags
- Reports Design
- Troubleshooting
- Selecting Issues
## Open project in devcontainer

- Clone the project to your local machine.
- Make sure that you have the Visual Studio Code editor installed on your system.
- Make sure that you have WSL (Ubuntu preferred) and Docker installed both on your system and inside WSL (the docker.sock UNIX socket file is necessary for the devcontainer to communicate with Docker running on the host machine).
- Open the project in Visual Studio Code. Once the project is opened, hit F1, type wsl, and click on "Reopen in WSL".
- If you haven't already done so, install the Dev Containers extension in Visual Studio Code.
- Once the extension is installed, you should see a green icon in the bottom left corner of the window.
- After you have installed the Dev Containers extension, it should automatically detect the .devcontainer folder inside the project opened in WSL and suggest reopening the project in a container.
- If it doesn't, press Ctrl + Shift + P, search for "Reopen in Container", and click on it.
- If everything goes well, the project should open in your devcontainer.
- Then follow the steps below to configure the project.
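To confirm the Docker prerequisite above, you can check from inside WSL that the Docker UNIX socket exists. This is a quick sketch; the socket path assumes Docker's default configuration:

```shell
# Check that Docker's default UNIX socket is present (path assumes a default setup)
if [ -S /var/run/docker.sock ]; then
  sock_status="present"
else
  sock_status="missing"
fi
echo "docker socket: $sock_status"
```

If the socket is missing inside WSL, the usual fix with Docker Desktop is to enable WSL integration for your distribution in the Docker Desktop settings.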
## Tools

Building and/or testing Kyverno requires additional tooling.
We use make to simplify installing the tools we use.
Tools will be installed in the .tools folder when possible; this keeps installed tools local to the Kyverno repository.
The .tools folder is ignored by git and binaries should not be committed.

Note: If you don't install tools, they will be downloaded/installed as necessary when running make targets.
You can manually install tools by running:

```shell
make install-tools
```

To remove installed tools, run:

```shell
make clean-tools
```

## Building local binaries

The Kyverno repository contains code for three different binaries:

- `kyvernopre`: binary to update/cleanup existing resources in clusters. This is typically run as an init container before the Kyverno controller starts.
- `kyverno`: the Kyverno controller binary.
- `cli`: the Kyverno command line interface.
Note: You can build all binaries at once by running `make build-all`.
To build the kyvernopre binary on your local system, run:

```shell
make build-kyverno-init
```

The binary should be created at ./cmd/kyverno-init/kyvernopre.
To build the kyverno binary on your local system, run:

```shell
make build-kyverno
```

The binary should be created at ./cmd/kyverno/kyverno.
To build the cli binary on your local system, run:

```shell
make build-cli
```

The binary should be created at ./cmd/cli/kubectl-kyverno/kubectl-kyverno.
## Building local images

In the same spirit as building local binaries, you can build local docker images instead of local binaries.
ko is used to build images, please refer to Building local images with ko.
Building images uses repository tags. To fetch repository tags into your fork, run the following commands:

```shell
git remote add upstream https://github.com/kyverno/kyverno
git fetch upstream --tags
```

When building local images with ko you can't specify the registry used to create the image names; it will always be ko.local.
Note: You can build all local images at once by running `make ko-build-all`.
To build the kyvernopre image on your local system, run:

```shell
make ko-build-kyverno-init
```

The resulting image should be available locally, named ko.local/github.com/kyverno/kyverno/cmd/initcontainer.
To build the kyverno image on your local system, run:

```shell
make ko-build-kyverno
```

The resulting image should be available locally, named ko.local/github.com/kyverno/kyverno/cmd/kyverno.
To build the cli image on your local system, run:

```shell
make ko-build-cli
```

The resulting image should be available locally, named ko.local/github.com/kyverno/kyverno/cmd/cli/kubectl-kyverno.
## Pushing images

Pushing images is very similar to building local images, except that built images will be published to a remote image registry.
ko is used to build and publish images, please refer to Pushing images with ko.
When pushing images you can specify the registry you want to publish images to by setting the REGISTRY environment variable (default value is ghcr.io).
When publishing images, we use the following strategy:

- All published images are tagged with `latest`. Images tagged with `latest` should not be considered stable and can come from multiple release branches or main.
- In addition to `latest`, dev images are tagged with the following pattern: `<major>.<minor>-dev-N-<git hash>`, where `N` is a two-digit number beginning at one for the major-minor combination and incremented by one on each subsequent tagged image.
- In addition to `latest`, release images are tagged with the following pattern: `<major>.<minor>.<patch>-<pre release>`. The pre-release part is optional and only applies to pre-releases (`-beta.1`, `-rc.2`, ...).
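As an illustration of the dev tag scheme, the hypothetical tag below matches the documented `<major>.<minor>-dev-N-<git hash>` pattern. The tag value and the regex are made up for this sketch, not part of the release tooling:

```shell
# A made-up dev tag: N is two digits, the hash is an abbreviated git commit hash
tag="1.12-dev-07-abc1234"
if echo "$tag" | grep -Eq '^[0-9]+\.[0-9]+-dev-[0-9]{2}-[0-9a-f]+$'; then
  echo "matches dev tag pattern"
fi
```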
Authenticating to the remote registry is done automatically in the Makefile with ko login.
To allow authentication you will need to set REGISTRY_USERNAME and REGISTRY_PASSWORD environment variables before invoking targets responsible for pushing images.
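For example, you might export the variables like this before invoking a publish target (the username and token values below are placeholders, not real credentials):

```shell
# Placeholder values; substitute your registry account and access token
export REGISTRY=ghcr.io
export REGISTRY_USERNAME=my-user
export REGISTRY_PASSWORD=my-token
```

With these set, the Makefile can perform `ko login` against the chosen registry when you run a push target.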
Note: You can push all images at once by running `make ko-publish-all` or `make ko-publish-all-dev`.
To push the kyvernopre image to a remote registry, run:

```shell
# push stable image
make ko-publish-kyverno-init
```

or

```shell
# push dev image
make ko-publish-kyverno-init-dev
```

The resulting image should be available remotely, named ghcr.io/kyverno/kyvernopre (by default, if the REGISTRY environment variable was not set).
To push the kyverno image to a remote registry, run:

```shell
# push stable image
make ko-publish-kyverno
```

or

```shell
# push dev image
make ko-publish-kyverno-dev
```

The resulting image should be available remotely, named ghcr.io/kyverno/kyverno (by default, if the REGISTRY environment variable was not set).
To push the cli image to a remote registry, run:

```shell
# push stable image
make ko-publish-cli
```

or

```shell
# push dev image
make ko-publish-cli-dev
```

The resulting image should be available remotely, named ghcr.io/kyverno/kyverno-cli (by default, if the REGISTRY environment variable was not set).
## Deploying a local build

After building local images, it is often useful to deploy those images in a local cluster.
We use KinD to create local clusters easily, and have make targets to create a cluster, load images into it, and deploy the helm charts.
If you already have a local KinD cluster running, you can skip this step.
To create a local KinD cluster, run:

```shell
make kind-create-cluster
```

You can override the k8s version by setting the KIND_IMAGE environment variable (default value is kindest/node:v1.29.1).
You can also override the KinD cluster name by setting the KIND_NAME environment variable (default value is kind).
To build local images and load them into a local KinD cluster, run:

```shell
# build kyvernopre image and load it in KinD cluster
make kind-load-kyverno-init
```

or

```shell
# build kyverno image and load it in KinD cluster
make kind-load-kyverno
```

or

```shell
# build kyvernopre and kyverno images and load them in KinD cluster
make kind-load-all
```

You can override the KinD cluster name by setting the KIND_NAME environment variable (default value is kind).
To build local images, load them into a local KinD cluster, and deploy helm charts, run:

```shell
# build images, load them in KinD cluster and deploy kyverno helm chart
make kind-deploy-kyverno
```

or

```shell
# deploy kyverno-policies helm chart
make kind-deploy-kyverno-policies
```

or

```shell
# build images, load them in KinD cluster and deploy helm charts
make kind-deploy-all
```

This will build local images, load the built images in every node of the KinD cluster, and deploy the kyverno and/or kyverno-policies helm charts in the cluster (overriding image repositories and tags).
You can override the KinD cluster name by setting the KIND_NAME environment variable (default value is kind).
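The KIND_NAME override behaves like any per-invocation environment variable: an assignment prefixed to a command applies to that one invocation only. A small sketch with a hypothetical cluster name (plain `echo` stands in for the make targets above):

```shell
# A prefixed assignment is visible to the invoked command...
KIND_NAME=kyverno-dev sh -c 'echo "deploying to cluster: $KIND_NAME"'
# prints: deploying to cluster: kyverno-dev

# ...but does not persist in the calling shell
echo "current shell KIND_NAME: ${KIND_NAME:-unset}"
# prints "unset" unless KIND_NAME is already set in your shell
```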
## Code generation

We use code generation tools to create the following portions of code:
- Generating kubernetes API client
- Generating API deep copy functions
- Generating CRD definitions
- Generating API docs
Note: You can run `make codegen-all` to build all generated code at once.
Based on the Go API definitions, you can generate the corresponding Kubernetes client by running:

```shell
# generate clientset, listers and informers
make codegen-client-all
```

or

```shell
# generate clientset
make codegen-client-clientset
```

or

```shell
# generate listers
make codegen-client-listers
```

or

```shell
# generate informers
make codegen-client-informers
```

This will output generated files in the /pkg/client package.
Based on the Go API definitions, you can generate the corresponding deep copy functions by running:

```shell
# generate all deep copy functions
make codegen-deepcopy-all
```

or

```shell
# generate kyverno deep copy functions
make codegen-deepcopy-kyverno
```

or

```shell
# generate policy reports deep copy functions
make codegen-deepcopy-report
```

This will output files named zz_generated.deepcopy.go in every API package.
Based on the Go API definitions, you can generate the corresponding CRD manifests by running:

```shell
# generate all CRDs
make codegen-crds-all
```

or

```shell
# generate Kyverno CRDs
make codegen-crds-kyverno
```

or

```shell
# generate policy reports CRDs
make codegen-crds-report
```

This will output CRD manifests in /config/crds.
Based on the Go API definitions, you can generate the corresponding API reference docs by running:

```shell
# generate API docs
make codegen-api-docs
```

This will output API docs in /docs/crd.
Based on the Go API definitions, you can generate the corresponding CRD definitions for helm charts by running:

```shell
# generate helm CRDs
make codegen-helm-crds
```

This will output CRD templates in /charts/kyverno/templates/crds.yaml.
Note: You can run `make codegen-helm-all` to generate CRDs and docs at once.
Based on the helm charts' default values, you can generate the corresponding helm chart docs by running:

```shell
# generate helm docs
make codegen-helm-docs
```

This will output docs in each helm chart's respective README.md.
Note: You can run `make codegen-helm-all` to generate CRDs and docs at once.
## Debugging local code

Running Kyverno on a local machine without deploying it in a remote cluster can be useful, especially for debugging purposes. You can run Kyverno locally or in your IDE of choice with a few steps:
- Create a local cluster
  - You can create a simple cluster with KinD with `make kind-create-cluster`
- Deploy the Kyverno manifests except the Kyverno `Deployment`
  - Kyverno is going to run on your local machine, so it should not run in the cluster at the same time
  - You can deploy the manifests by running `make debug-deploy`
- There are multiple environment variables that need to be configured. The variables can be found here, and their values can be set using `export NAME=value`
- To run Kyverno locally against the remote cluster, you will need to provide the following arguments:
  - `--kubeconfig` must point to your kubeconfig file (usually `~/.kube/config`)
  - `--serverIP` must be set to `<local ip>:9443` (`<local ip>` is the private IP address of your local machine)
  - `--backgroundServiceAccountName` must be set to `system:serviceaccount:kyverno:kyverno-background-controller`
  - `--caSecretName` must be set to `kyverno-svc.kyverno.svc.kyverno-tls-ca`
  - `--tlsSecretName` must be set to `kyverno-svc.kyverno.svc.kyverno-tls-pair`
Once you are ready with the steps above, Kyverno can be started locally with:
```shell
go run ./cmd/kyverno/ --kubeconfig ~/.kube/config --serverIP=<local-ip>:9443 --backgroundServiceAccountName=system:serviceaccount:kyverno:kyverno-background-controller --caSecretName=kyverno-svc.kyverno.svc.kyverno-tls-ca --tlsSecretName=kyverno-svc.kyverno.svc.kyverno-tls-pair
```

You will need to adapt those steps to run debug sessions in your IDE of choice, but the general idea remains the same.
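To fill in `<local ip>`, one way to look up your machine's private address is shown below. `hostname -I` is Linux-specific, and the fallback value is only there so the sketch never emits an empty address; adapt as needed for your platform:

```shell
# First address reported by hostname -I is typically the primary private IP (Linux);
# falls back to 127.0.0.1 if detection fails
local_ip="$(hostname -I 2>/dev/null | awk '{print $1}')"
local_ip="${local_ip:-127.0.0.1}"
echo "use --serverIP=${local_ip}:9443"
```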
## Profiling

To profile the Kyverno application running inside a Kubernetes pod, set the --profile flag to true in install.yaml. The default profiling port is 6060, and it can be configured via --profile-port.
```shell
--profile
    Set this flag to 'true', to enable profiling.
--profile-port string
    Enable profiling at given port, defaults to 6060. (default "6060")
```
You can get at the application in the pod by port forwarding with kubectl, for example:
```shell
$ kubectl -n kyverno get pod
NAME                                             READY   STATUS    RESTARTS   AGE
kyverno-admission-controller-57df6c565f-pxpnh    1/1     Running   0          20s
kyverno-background-controller-766589695-dhj9m    1/1     Running   0          20s
kyverno-cleanup-controller-54466dfbc6-5mlrc      1/1     Running   0          19s
kyverno-cleanup-update-requests-28695530-ft975   1/1     Running   0          19s
kyverno-reports-controller-76c49549f4-tljwm      1/1     Running   0          20s
```

Check the port of the pod you'd like to forward using the command below.

```shell
$ kubectl get pod kyverno-admission-controller-57df6c565f-pxpnh -n kyverno --template='{{(index (index .spec.containers 0).ports 0).containerPort}}{{"\n"}}'
9443
```

Use the exposed port from above to run port-forward with the command below.

```shell
$ kubectl -n kyverno port-forward kyverno-admission-controller-57df6c565f-pxpnh 6060:9443
Forwarding from 127.0.0.1:6060 -> 9443
Forwarding from [::1]:6060 -> 9443
```

The HTTP endpoint will now be available as a local port.
Alternatively, use a Service of the type LoadBalancer to expose Kyverno. An example Service manifest is given below:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: pproc-service
  namespace: kyverno
spec:
  selector:
    app: kyverno
  ports:
    - protocol: TCP
      port: 6060
      targetPort: 6060
  type: LoadBalancer
```

You can then generate the file for the memory profile with curl and pipe the data to a file:

```shell
curl http://localhost:6060/debug/pprof/heap > heap.pprof
```

Generate the file for the CPU profile with curl and pipe the data to a file:

```shell
curl "http://localhost:6060/debug/pprof/profile?seconds=60" > cpu.pprof
```

To analyze the data:

```shell
go tool pprof heap.pprof
```

## API Design

See docs/dev/api
## Logging

See docs/dev/logging/logging.md
## Reports Design

See docs/dev/reports
## Selecting Issues

When you are ready to contribute, you can select an issue from the Good First Issues list.