K8s ConfigMap via Helm: a size limitation
After developing a Docker image we wanted to deploy it in Kubernetes with different databases as back-end, so we could easily test whether the container works as expected. The product we containerized needs some configuration files and database drivers to be able to connect to a database back-end, and so the idea was born to use configMaps to hold these configuration files and database drivers.
Project Setup
The tryout project has the following setup; all Helm commands are run from the root of this project.
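A sketch of the layout, reconstructed from the commands used throughout this post (the exact contents of the mssql directory and of the chart itself are assumed):

k8s_configmaptest/
├── helm/                         # the Helm chart
│   ├── Chart.yaml
│   ├── values.yaml
│   └── templates/
│       └── configmap.yaml
└── sample-config/
    ├── mysql/
    │   ├── conf/                 # datasource.jdbc-default.json, repo.jdbc.json
    │   ├── resolver/             # mysql.repo.properties
    │   └── bundle/               # mysql-connector-java-5.1.49.jar
    └── mssql/                    # same layout, MSSQL variants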
This structure should be pretty self-explanatory (I hope).
ConfigMap creation by using kubectl
Having not worked with configMaps that much, the first thing was to determine whether what we want is possible. According to the kubectl documentation and some experimentation it is pretty straightforward to put the contents of a complete directory into a configMap. As the configuration and database drivers are spread over three different directories, we need to create three configMaps. So the first command below creates the configMap tst-config-conf containing all files from the directory sample-config/mysql/conf/, which currently holds two files. The same could be achieved for the directories sample-config/mysql/resolver/ and sample-config/mysql/bundle/ without any problems.
$ kubectl create configmap tst-config-conf --from-file=sample-config/mysql/conf/
$ kubectl create configmap tst-config-resolver --from-file=sample-config/mysql/resolver
$ kubectl create configmap tst-config-bundle --from-file=sample-config/mysql/bundle/
$ kubectl get configmaps
NAME                  DATA   AGE
tst-config-bundle     1      81s
tst-config-conf       2      95s
tst-config-resolver   1      64s
So from the above we can conclude that configMaps can be used to store the needed files, and as we are using Helm for deployment we should now be able to configure Helm to do the same. Keep in mind that the bundle directory contains binary files!
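Out of curiosity you can also check how kubectl stored the jar file; as far as I know, binary content ends up base64-encoded under a binaryData section instead of data. The output below is a trimmed sketch, not a verbatim capture:

$ kubectl get configmap tst-config-bundle -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tst-config-bundle
binaryData:
  mysql-connector-java-5.1.49.jar: <base64 encoded jar contents>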
Helm Configuration
First attempt
In my first attempt I simply created a configMap definition in which I point to the directories containing the files which should become part of the configMap, basically similar to what I did when creating the configMap using kubectl.
So I created a configMap definition which reads the files from the same directory as used when creating the configMap with kubectl, but the result was not what I expected: the configMap was empty.
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-idm-conf
data:
{{- (.Files.Glob "sample-config/mysql/conf/**.json").AsConfig | nindent 2 }}
With the above configuration in place I ran the command:
helm install --debug -f helm/values.yaml idm-idm helm
Which yielded the (for me) unexpected result:
# Source: helm/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-idm-conf
data: {}
After rereading the documentation it boiled down to the fact that Helm reads files starting from the root of the Helm chart, in this case the directory called helm. As you can see, the sample-config and helm directories are at the same level. Of course I changed the configMap to read from `../sample-config/mysql/conf/**.json`, but this also didn't work, as Helm does not allow access to files outside the chart.
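In other words, for the Glob pattern above to match anything, the sample data would have to live inside the chart itself, roughly like this (sketch):

helm/
├── Chart.yaml
├── values.yaml
├── templates/
│   └── configmap.yaml
└── sample-config/
    └── mysql/
        ├── conf/
        ├── resolver/
        └── bundle/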
The next thing I did was move the sample-config directory into the helm directory and run the Helm command again; this time I got a failure:
Error: create: failed to create: Request entity too large: limit is 3145728
So what is happening here? This is not what I expected (note that 3145728 bytes is exactly 3 MiB). I ran a slightly changed command to see what the issue might be:
helm template --debug -f helm/values.yaml idm-idm helm
But this command runs just fine; it gives output like this:
install.go:173: [debug] Original chart version: ""
install.go:190: [debug] CHART PATH: /k8s_configmaptest/helm

---
# Source: helm/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-idm-conf
data:
  datasource.jdbc-default.json: |
    {
    ...
    }
  repo.jdbc.json: |
    {
    ...
    }
The one thing which comes to mind is that somehow the payload to deploy is too big, so I removed the database drivers from the sample-config directory for both MySQL and MSSQL. When I ran the command again the configMap was deployed without any problem. I also tried to exclude sample-config using .helmignore; then the deployment also works, but the resulting configMaps are empty as there are no files to include.
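For completeness: .helmignore uses the same kind of glob patterns as .gitignore, so excluding the sample data is a single entry (assuming sample-config sits directly under the chart root):

# helm/.helmignore
sample-config/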
So including the sample-config directory in the Helm chart is not an option: it results in a failure, and personally I don't like it, as the same sample-config is used by our docker-compose setup for running local tests.
A remaining question is: what entity is too large?
Second attempt
The second attempt consists of using the --set-file option of the Helm command, which is described in the documentation as follows:
set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
So in action it would boil down to a command like this:
$ helm install -f helm/values.yaml idm-idm helm \
    --set-file "datasource=./sample-config/mysql/conf/datasource.jdbc-default.json" \
    --set-file "repo=./sample-config/mysql/conf/repo.jdbc.json" \
    --set-file "resolver=./sample-config/mysql/resolver/mysql.repo.properties" \
    --set-file "bundle=./sample-config/mysql/bundles/mysql-connector-java-5.1.49.jar"
Now the configMap configuration needs to change: it should take the values defined on the command line instead of accessing the directories which contain the files. Also, sample-config can be removed from the Helm chart directory (which means the project setup is as it was originally).
The new configMap configuration should use the parameters defined on the command line as mentioned before; after making the change, it looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-idm-conf
data:
  datasource.jdbc-default.json: {{ toJson .Values.datasource }}
  repo.jdbc.json: {{ toJson .Values.repo }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-idm-resolver
data:
  mysql.repo.properties: {{- .Values.resolver | nindent 4 }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-idm-bundle
binaryData:
  mysql-connector-java-5.1.49.jar: {{ .Values.bundle | b64enc }}
Now when the above helm command is run I get the following error message:
Error: create: failed to create: Request entity too large: limit is 3145728
helm.go:81: [debug] Request entity too large: limit is 3145728
create: failed to create
Something is too big, but what? When using kubectl to create the configMaps there was no issue at all, so it must be something Helm related. I reran the previous command, this time adding the --dry-run option, to see how the resulting YAML would look. This gave me another error:
NAME: idm-idm
LAST DEPLOYED: Thu May 20 13:31:28 2021
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
Error: unable to write YAML output: error converting JSON to YAML: yaml: control characters are not allowed
helm.go:81: [debug] error converting JSON to YAML: yaml: control characters are not allowed
unable to write YAML output
So what is the cause of the first error; is it caused by the resulting YAML, which seems to be invalid? I removed the test-idm-bundle configMap from the definition and ran the same command again (without --dry-run this time), which gave me the following error:
Error: create: failed to create: Secret "sh.helm.release.v1.idm-idm.v1" is invalid: data: Too long: must have at most 1048576 bytes
helm.go:81: [debug] Secret "sh.helm.release.v1.idm-idm.v1" is invalid: data: Too long: must have at most 1048576 bytes
create: failed to create
Now the error has changed, so let's run that command again, this time including the --dry-run option; this resulted in the 'unable to write YAML output' error again. One thing I decided to do at this point was to clean up the Helm command by removing the now unused part `--set-file "bundle=./sample-config/mysql/bundles/mysql-connector-java-5.1.49.jar"`. Now running the altered command
helm install --debug -f helm/values.yaml idm-idm helm \
    --set-file "datasource=./sample-config/mysql/conf/datasource.jdbc-default.json" \
    --set-file "repo=./sample-config/mysql/conf/repo.jdbc.json" \
    --set-file "resolver=./sample-config/mysql/resolver/mysql.repo.properties"
which finally ran successfully, and when checking which configMaps exist I see the ones I expected, meaning the command succeeded.
$ kubectl get configmaps
NAME                DATA   AGE
kube-root-ca.crt    1      20d
test-idm-conf       2      48s
test-idm-resolver   1      49s
So there must be some size limitation; something limits the size when using Helm to deploy larger files (maybe even larger charts).
Lessons learned
What I learned from this is:
- Helm by default expects files referenced from templates to be placed inside the chart directory; paths are resolved relative to the chart root.
- You can create configMaps containing binary files, next to, for example, property files.
- These binary files are located under a different section, called binaryData (instead of the section called data).
- There is a limit to the size of the (binary) data that can be put in a configMap when using Helm; by the looks of the error, Helm uses a Secret to store the release information, and a Secret is limited to a size of 1 MiB (some more information regarding this can be found here and here; a quick way to check the release size is sketched below this list).
- There is also a size limit for the configMap itself: it too can contain only 1 MiB of data.
- The files passed with --set-file on the command line become values in the release, so even when a file is not used in any configMap its contents are still part of the package that gets deployed, causing a failure when the limit is breached (kind of guessing this).
- Indentation is important (I produced many invalid YAMLs while trying things out).
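To get a feeling for how close a release is to that 1 MiB Secret limit, something like the following can be used; the Secret name is taken from the error message above, and the byte count is of the base64-encoded payload, so treat it as an indication only:

$ kubectl get secret sh.helm.release.v1.idm-idm.v1 -o jsonpath='{.data.release}' | wc -c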
So in the end I was not able to achieve what I wanted: a configMap, created from a Helm chart, containing the configuration items needed to run the product successfully.
Other options to make it work would be to either create the configMaps manually (upfront or from a deploy pipeline) and let the Helm chart decide which configMap to use, or to create an initContainer which places the files in the right location so the product can pick them up.
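As an illustration of the first option, a deployment template could simply mount the configMaps that were created with kubectl earlier. Below is a minimal sketch; the container name, image and mount paths are made up for the example, and in a real chart the configMap names would probably come from values (e.g. .Values.configMapName) so the chart can decide which back-end configuration to use:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: idm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idm
  template:
    metadata:
      labels:
        app: idm
    spec:
      containers:
        - name: idm
          image: example/idm:latest          # hypothetical image
          volumeMounts:
            - name: conf
              mountPath: /opt/idm/conf       # hypothetical target path
            - name: bundle
              mountPath: /opt/idm/bundle     # hypothetical target path
      volumes:
        - name: conf
          configMap:
            name: tst-config-conf            # created upfront with kubectl
        - name: bundle
          configMap:
            name: tst-config-bundle          # created upfront with kubectl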
Environment setup
I set up the minikube environment using the following command:
$ minikube start --memory=5000 --cpus=3 --disk-size=40g --vm-driver=virtualbox --bootstrapper kubeadm --kubernetes-version=1.21.1
The versions of the tools I used are:
- minikube version: v1.20.0
- helm version: version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"dirty", GoVersion:"go1.16.3"}
- kubectl version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T21:15:16Z", GoVersion:"go1.16.3", Compiler:"gc", Platform:"darwin/amd64"}
- Kubernetes version : version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:12:29Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}