Playing around with Gitlab.com's CI and Docker Registry

Dabbling with Gitlab

I've got a Gitlab Community Edition (CE) instance running at home with 3 runners on separate devices.

I'd like to say I've become familiar with the basics of Gitlab's CI; however, I haven't done much with the continuous delivery part.

What I'd like to figure out on the CI side is how to test a repo on different versions of software/OS-s simultaneously, without too much copy-pasting of the same block of code. For example:


  • testing the same ansible role with different releases of Debian
  • building docker images based on different Debian releases for comparison purposes
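One Gitlab feature that looks promising for this is a matrix job. Here's a sketch (role path and release list are made up; assumes a Gitlab version with `parallel: matrix` support) of running the same Ansible test once per Debian release:

```yaml
# .gitlab-ci.yml sketch: one job definition, expanded into one job per release
test-role:
  image: debian:$DEBIAN_RELEASE
  parallel:
    matrix:
      - DEBIAN_RELEASE: ["buster", "bullseye", "bookworm"]
  script:
    - apt-get update && apt-get install -y ansible
    # hypothetical test playbook path
    - ansible-playbook tests/test.yml --syntax-check
```

Each matrix entry becomes its own job in the pipeline, so the script block only has to be written once.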

The only CD I've been doing at home so far is pushing my own docker images to Dockerhub, which is not complicated at all.
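As a sketch, such a job boils down to something like the following (the `DOCKERHUB_USER`/`DOCKERHUB_PASSWORD` variable names and image name are my own placeholders, assumed to be set in the repo's CI/CD variables):

```yaml
# minimal docker-in-docker build-and-push job
build-push:
  image: docker:latest
  services:
    - docker:dind
  script:
    - echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USER" --password-stdin
    - docker build -t "$DOCKERHUB_USER/myimage:latest" .
    - docker push "$DOCKERHUB_USER/myimage:latest"
```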

Pushing Docker images from home

I've got a few images on Dockerhub, which I've been building, testing and pushing from home.

I've got daily scheduled jobs running in my private Gitlab instance that rebuild and push the docker images early in the morning.
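For reference, limiting a job to scheduled pipelines only takes a small rules clause (sketch; job name is a placeholder):

```yaml
rebuild-image:
  script:
    - docker build -t myimage .
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```

The schedule itself (e.g. daily, early morning) is configured under the project's CI/CD schedules, not in the file.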

However, this requires me to keep my own machines powered on all the time, even if I wanted to go away someplace.

I could just power them off while gone, but then the images wouldn't be rebuilt daily.

So now I've come to the conclusion that it'd be smarter to utilize Gitlab.com's free runner instances to build and push images.

At this time, I only publish on Dockerhub, but this way I could push both to Gitlab.com's docker registry and Dockerhub to provide a bit of redundancy for myself.

My only concern with building docker images on Gitlab.com would be needing to store my Dockerhub login details in each repo's CI/CD variable settings, which would be really cumbersome to do manually.

Terraform and Gitlab

Looking at Terraform's documentation in the providers section, there is a VCS/Gitlab provider.

Further reading up on these pages could save me some time in the future.
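For instance, the provider's gitlab_project_variable resource looks like it could push the Dockerhub credentials into every repo without clicking through the UI. A sketch (project ID is a placeholder; the token is assumed to come from the GITLAB_TOKEN environment variable):

```hcl
terraform {
  required_providers {
    gitlab = {
      source = "gitlabhq/gitlab"
    }
  }
}

provider "gitlab" {
  # reads GITLAB_TOKEN from the environment
}

variable "dockerhub_password" {
  type      = string
  sensitive = true
}

# placeholder project ID; repeat (or loop) per repo
resource "gitlab_project_variable" "dockerhub_password" {
  project   = "12345678"
  key       = "DOCKERHUB_PASSWORD"
  value     = var.dockerhub_password
  protected = true
  masked    = true
}
```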

Gitlab repo playground

I made a dummy git repo both on public Gitlab.com and on my private Gitlab instance.

The purpose of this repo is to test/play with Gitlab's CI.

What I've managed to do so far is build and push a docker image to Gitlab's internal container registry without specifying any CI/CD vars in the settings manually.

Here's the raw version of the Gitlab CI file that made this possible.

Here's a link to Gitlab's doc page of predefined environment variables.

Some comments below for the vars/commands in that CI file above:

  • "$CI_REGISTRY_USER": this seems to return gitlab-ci-token in the build logs, yet this is the only var that let me log into Gitlab's docker registry, yet when pushing the docker image to Gitlab's registry, I'd get requested access to the resource is denied errors.

  • "$CI_REGISTRY": this returns the registry on the Gitlab instance if it's activated

  • echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin: This is more secure than passing -p password to docker login for some reason

  • docker push $CI_REGISTRY/$GITLAB_USER_LOGIN/$CI_PROJECT_NAME:$CI_COMMIT_SHA: this seems to work only if the person initiating the build is also the owner of the project, because $GITLAB_USER_LOGIN is "The login username of the user who started the job." E.g. on my private instance, my projects are under a non-admin user; if I run a pipeline as a different user, the build fails.

  • $CI_PROJECT_NAMESPACE: "The project namespace (username or group name) that is currently being built." This seems to be a better choice than $GITLAB_USER_LOGIN in the docker push URL, as it also works when a pipeline is launched by a different user.
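Putting the bullet points above together, a minimal version of such a CI file could look like this (a sketch reconstructed from the commands discussed, not the exact file I linked):

```yaml
build:
  image: docker:latest
  services:
    - docker:dind
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" "$CI_REGISTRY" --password-stdin
    # $CI_PROJECT_NAMESPACE instead of $GITLAB_USER_LOGIN, so any user can run the pipeline
    - docker build -t "$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME:$CI_COMMIT_SHA"
```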