Deploying Certdog to Kubernetes
From version: 1.17.0
The following guide contains the required steps to deploy Certdog to an existing Kubernetes cluster.
Pre-requisites
You will need an existing Kubernetes cluster to install Certdog onto. Certdog 1.17 has been tested against Kubernetes v1.34 and v1.35; compatibility with other versions cannot be guaranteed.
You will also need permissions to access the Certdog images via the private registry in which they are hosted.
The following tools will be required for the installation:
- kubectl
- helm, if you use the Helm chart instead of plain manifests
Check that kubectl is pointing at the right cluster using kubectl config current-context. helm uses the same configuration as kubectl, so updating kubectl's configuration is sufficient to also point helm at the correct cluster.
1. Choose between a managed or external database
Certdog requires a MongoDB database to store all data used by the various components.
When deploying Certdog to Kubernetes, you can choose to either let Certdog manage an instance of MongoDB Community Server itself (managed), or provide the URI of a database you have created yourself (external).
1a. Set up a managed database
To use a managed database, you will first need to install the MongoDB Controllers for Kubernetes (MCK) Operator on your cluster. Follow the documentation provided by MongoDB for detailed installation instructions and possible configuration options.
If you have helm you can run the following commands to install the MCK Operator:
helm repo add mongodb https://mongodb.github.io/helm-charts
helm repo update
helm install mongodb-kubernetes-operator mongodb/mongodb-kubernetes
Instructions for installation from plain manifests are available in the MongoDB docs linked above.
After the MCK operator is installed, deploy a MongoDBCommunity resource to create the database.
This can be done with plain manifests provided by Support, or by providing the relevant settings in the Helm chart, under db.managed.
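As a rough sketch, a MongoDBCommunity resource might look like the following. This is an illustrative assumption, not a manifest shipped with Certdog: the resource name, MongoDB version, and user/database names are placeholders, and the exact schema should be taken from the manifests provided by Support or the MCK documentation.

```yaml
# Hypothetical example - names, version, and user details are placeholders.
apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: certdog-db
spec:
  members: 3
  type: ReplicaSet
  version: "7.0.5"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: certdog-user
      db: admin
      passwordSecretRef:
        name: certdog-db-user-password   # secret holding this user's password under "password"
      roles:
        - name: readWrite
          db: certdog
      scramCredentialsSecretName: certdog-db-user
```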
For each database user, you will need to provide a secret containing that user's password under the key password.
The Helm chart by default creates a database admin for management, and a database user which the components use to access the database. The secrets for these users are autogenerated at installation by default; to disable this, set db.managed.adminPasswordSecret.create or db.managed.userPasswordSecret.create to false. To use pre-existing secrets, or to change the names of the secrets in which the passwords are generated, set db.managed.adminPasswordSecret.name and db.managed.userPasswordSecret.name.
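For instance, to use pre-existing secrets instead of autogenerated ones, a values.yaml fragment along these lines could be used (the secret names here are placeholders; the db.managed.* keys are as described above):

```yaml
db:
  managed:
    adminPasswordSecret:
      create: false                 # do not autogenerate; use an existing secret
      name: my-db-admin-password    # placeholder secret name
    userPasswordSecret:
      create: false
      name: my-db-user-password     # placeholder secret name
```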
1b. Set up an external database
To use an external database, you only need to provide the URI of the database to each component.
You should provide this URI through a secret, under a key named connectionString.standardSrv. Despite the name, this can be either an SRV or a standard connection string, but the SRV format is recommended.
This URI should include:
- The user credentials
- Authentication details such as the auth source
- The name of the database
E.g. mongodb+srv://user:password@db.example.com/certdog?authSource=admin (note that SRV connection strings must not include a port; a standard connection string such as mongodb://user:password@db.example.com:27017/certdog?authSource=admin does).
If you are using the Helm chart, point Certdog to the secret holding the database URI with db.external.userUriSecret in your values.yaml file.
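Putting the above together, a sketch of such a secret might look like the following (the secret name and URI are placeholders; only the key connectionString.standardSrv is fixed):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: certdog-db-uri              # placeholder name
type: Opaque
stringData:
  connectionString.standardSrv: "mongodb+srv://user:password@db.example.com/certdog?authSource=admin"
```

With the Helm chart, db.external.userUriSecret would then reference this secret, assuming its value is the secret name (e.g. certdog-db-uri).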
2. Create the master password for components
Most components in Certdog require a master password to protect data. This is provided via a secret, with the master password under password.
In the Helm chart, this will be autogenerated by default. To override this, you can set masterPasswordSecret.create to false, and provide the name of the secret under masterPasswordSecret.name.
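As a sketch, a pre-created master password secret and the corresponding values.yaml settings might look like this (the secret name and password value are placeholders; the key password and the masterPasswordSecret.* settings are as described above):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: certdog-master-password       # placeholder name
type: Opaque
stringData:
  password: "a-strong-master-password" # placeholder value
```

In values.yaml, you would then set masterPasswordSecret.create to false and masterPasswordSecret.name to certdog-master-password.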
3. Deploy the Certdog components to Kubernetes
Before deploying, the components can be customised by either editing the plain manifests provided by Support, or editing a values.yaml file for the Helm chart.
To deploy the components with plain manifests, use:
kubectl apply -f /path/to/manifests
To deploy the components with Helm, use:
helm install -f values.yaml <release-name> <path-or-url-to-chart>
Contact support for more information on where to access the chart and required images.
4. Making the cluster production-ready
For production use, there are extra considerations, such as encrypting secrets at rest, access control, and mutual TLS between services in zero-trust environments.
See the Kubernetes documentation for information on bringing your cluster up to production standards.