pgAdmin has long had a container distribution; however, the development team rarely used it, except when testing releases. So virtually all of our experience has been with Docker. Recently, a user ran into an issue when running under Kubernetes that I was unable to reproduce in Docker, so I spent some time learning how a pgAdmin deployment would work in that environment, and ironically it worked just fine; I couldn't reproduce the bug! Regardless, I gained an understanding of how to deploy pgAdmin in Kubernetes, so here's how it works.

Note that all the YAML below could be in a single file; however, I've split it up into multiple files for convenience.

First we'll create a secret. This is a way of storing sensitive information in Kubernetes for use as part of whatever is being deployed. In the case of pgAdmin, we'll use it to store the initial password that will be set for the administrator. The YAML to create the secret looks like this: apiVersion: v1

This will create a secret with the name pgadmin. The password is simply base64 encoded (in this case, it's SuperSecret). Assuming the YAML is saved to pgadmin-secret.yaml, we can create the secret using kubectl's apply option: $ kubectl apply -f pgadmin-secret.yaml

Next we'll create a configuration file for pgAdmin. This could include a config_local.py file or similar, but there are other ways to handle the pgAdmin configuration options that we'll use later. In this example we'll use the ConfigMap to inject a JSON file that contains a list of servers to register for use. This will later be mounted as a file by the container when launched: apiVersion: v1

This defines a ConfigMap called pgadmin-config, containing a piece of data (actually the JSON that will be read by pgAdmin) called servers.json. The ConfigMap is created in Kubernetes much as in the previous example: $ kubectl apply -f pgadmin-configmap.yaml

A service in Kubernetes is an abstract way to describe a logical set of pods (containing one or more containers) and a policy by which they can be accessed: apiVersion: v1

This YAML defines a service called pgadmin-service which can be accessed using TCP on port 80, targeting any pod with the label app=pgadmin. It may be tempting to set the service type to LoadBalancer here, and then run multiple replicas of pgAdmin. This will not work as you expect! pgAdmin maintains a pool of database connections within each instance, and neither this nor session data can be shared between multiple instances. Create the service as in the previous examples: $ kubectl apply -f pgadmin-service.yaml

The final piece of the puzzle is a StatefulSet. These are perhaps more commonly used when deploying a stateful application that has multiple replicas, for example a cluster of PostgreSQL servers. Whilst we'll only create one instance of pgAdmin for the reasons noted above in the paragraph about load balancers, using a StatefulSet is handy because persistent storage will be automatically provisioned. A Deployment could be used instead of the StatefulSet, but then we'd also need to define and create a persistent volume (PV) and persistent volume claim (PVC). Doing it that way may be preferable if you wish to decouple the persistent storage from the application. The YAML to create the application looks like this: apiVersion: apps/v1

First comes the metadata that defines the name of the application and adds the pgadmin label that the service will look for. Then, a number of parameters are defined to specify the service name, the number of replicas (which must be 1), the update strategy, and additional labels. Then the container to be deployed is defined: we give it a name, specify the container image to pull from Docker Hub, and the policy for when to pull new images. We then define the environment variables for the container that will contain the email address and password for the administrator account that will be created on first launch. The email address is hard-coded in the YAML, whilst the password is pulled from the secret we created earlier. Next, the network port used by the container is specified. In this example we're using the HTTP port, but we could also use the HTTPS port. The remainder of the YAML is used to define the mounts and the volumes they'll use. The pgadmin-config mount is the JSON file containing the server definitions, and the pgadmin-data mount is a directory that pgAdmin uses to store configuration data, user preferences, session data, and other files. The pgadmin-config volume is defined as being part of the ConfigMap, whilst the pgadmin-data volume is defined as a template rather than an actual volume.
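To make the walkthrough concrete, here is a minimal sketch of what the secret manifest could look like. The key name pgadmin-password is an assumption for illustration; the base64 value decodes to SuperSecret.

```yaml
# Sketch only: key name "pgadmin-password" is assumed, not taken
# from the original manifest.
apiVersion: v1
kind: Secret
metadata:
  name: pgadmin
type: Opaque
data:
  # echo -n 'SuperSecret' | base64
  pgadmin-password: U3VwZXJTZWNyZXQ=
```

Any key name works, as long as the same name is used in the secretKeyRef of the pod spec that consumes it.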
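Along the same lines, a sketch of the ConfigMap carrying servers.json. The server entry itself (name, host, and so on) is a placeholder; the JSON structure follows pgAdmin's documented servers.json import format.

```yaml
# Sketch only: the server details below are placeholders.
apiVersion: v1
kind: ConfigMap
metadata:
  name: pgadmin-config
data:
  servers.json: |
    {
      "Servers": {
        "1": {
          "Name": "Example PostgreSQL",
          "Group": "Servers",
          "Host": "postgres.example.com",
          "Port": 5432,
          "MaintenanceDB": "postgres",
          "Username": "postgres",
          "SSLMode": "prefer"
        }
      }
    }
```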
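A possible shape for pgadmin-service, matching the description in the text: TCP on port 80, targeting pods labelled app=pgadmin. The NodePort type is an assumption here; pick whatever suits your cluster, bearing in mind the caveat about not load balancing across multiple replicas.

```yaml
# Sketch only: service type is an assumption.
apiVersion: v1
kind: Service
metadata:
  name: pgadmin-service
spec:
  type: NodePort
  selector:
    app: pgadmin
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
```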
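Finally, a sketch of the StatefulSet along the lines described: one replica, environment variables for the administrator account (with the password drawn from the secret), the HTTP port, the servers.json mount, and a volume claim template for the data directory. The image tag, storage size, and email address are illustrative assumptions; /pgadmin4/servers.json and /var/lib/pgadmin are the paths documented for the pgAdmin container image, but check them against the release you deploy.

```yaml
# Sketch only: image tag, storage size, and admin email are assumptions.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pgadmin
spec:
  serviceName: pgadmin-service
  replicas: 1          # must be 1; see the note about load balancers
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: pgadmin
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
        - name: pgadmin
          image: dpage/pgadmin4:latest
          imagePullPolicy: Always
          env:
            # Hard-coded admin email; password comes from the secret.
            - name: PGADMIN_DEFAULT_EMAIL
              value: admin@example.com
            - name: PGADMIN_DEFAULT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pgadmin
                  key: pgadmin-password
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          volumeMounts:
            # Server definitions from the ConfigMap, mounted as a file.
            - name: pgadmin-config
              mountPath: /pgadmin4/servers.json
              subPath: servers.json
            # Writable data directory for preferences, sessions, etc.
            - name: pgadmin-data
              mountPath: /var/lib/pgadmin
      volumes:
        - name: pgadmin-config
          configMap:
            name: pgadmin-config
  # Template rather than an actual volume: storage is provisioned
  # automatically for each replica.
  volumeClaimTemplates:
    - metadata:
        name: pgadmin-data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 3Gi
```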