As you’ve probably heard, I joined GitLab on September 1st as a Senior Solution Architect. I’m very happy about this move and looking forward to working with our partners and customers. As a result, I’ll be publishing more blog posts about GitLab’s AI-native DevSecOps platform here on viktorious.nl. Stay tuned if you want to learn more—I’ll start with some basics, but the goal is to cover more advanced topics as well.
At the core of GitLab’s DevSecOps platform, we find some very powerful CI/CD capabilities built around the .gitlab-ci.yml file that lives in your project’s code repository. In this blog post, I’ll explain how CI/CD works in GitLab and show you some of the powerful features it offers! This is part 1 of a series of blog posts focused on CI/CD basics; more advanced CI/CD topics, as well as the integration with GitLab’s AI capabilities (called GitLab Duo), will be covered in follow-up blog posts.
To get started with GitLab CI/CD, you need to create a .gitlab-ci.yml file in the root of your GitLab project/source code repo. It’s a YAML file with its own custom syntax that defines the pipeline(s) executed as part of the Continuous Integration and/or Continuous Delivery/Deployment process. A typical pipeline consists of stages and jobs that are executed as part of the pipeline.
You’ll need runners, which are the agents that execute the jobs defined in your pipeline. Runners can either be self-managed (running in your own infrastructure and managed by you) or can be consumed from GitLab’s shared infrastructure (hosted runners). I’ll talk more extensively about runners in a future article—this blog post will specifically focus on the CI/CD flow and the pipelines you can configure.
Before diving in, some prerequisites for this article are:
- A GitLab account (GitLab.com or self-managed)
- Basic knowledge of YAML syntax
- (Optional) A Kubernetes cluster for the deployment section
- (Optional) Docker knowledge for the container build section
About Stages and Jobs
As mentioned previously, a typical GitLab pipeline consists of Stages and Jobs:
- Stages define the order of execution; typical stages are, for example, build, test, and deploy.
- Jobs are executed within the context of a stage and are used to compile code, run tests, and/or run security scans.
In GitLab a pipeline is defined in a .gitlab-ci.yml file:
stages: # List of stages for jobs, and their order of execution
  - build
  - test
  - deploy

build-job: # This job runs in the build stage, which runs first.
  stage: build
  script:
    - echo "Compiling the code..."
    - echo "Compile complete."

unit-test-job: # This job runs in the test stage.
  stage: test # It only starts when the job in the build stage completes successfully.
  script:
    - echo "Running unit tests... This will take about 60 seconds."
    - sleep 60
    - echo "Code coverage is 90%"

lint-test-job: # This job also runs in the test stage.
  stage: test # It can run at the same time as unit-test-job (in parallel).
  script:
    - echo "Linting code... This will take about 10 seconds."
    - sleep 10
    - echo "No lint issues found."

deploy-job: # This job runs in the deploy stage.
  stage: deploy # It only runs when *both* jobs in the test stage complete successfully.
  environment: production
  script:
    - echo "Deploying application..."
    - echo "Application successfully deployed."
As the comments indicate, jobs within a stage run concurrently. After all the jobs in a stage have completed, the CI/CD process moves on to the next stage and its jobs. This example pipeline (which isn’t doing that much right now) is the default example you get when you open the GitLab pipeline editor for the first time and no actual .gitlab-ci.yml file exists yet.

This pipeline just produces some log messages and not much more ;). You’ll get a lot of green checkmarks, though, so you can verify your pipeline is doing OK :).

Implement a simple unit test
Let’s make the pipeline a bit more useful and add some functionality so it’s actually doing something. In this example the pipeline is used in the context of the NodeJS Express template – feel free to follow along :). I’ve changed the stages a bit, and not all stages are implemented yet – let’s have a look at our updated .gitlab-ci.yml:
# Example pipeline
image: node:21-alpine

# List of stages for jobs, and their order of execution
stages:
  - install
  - test
  - build
  - deploy

cache:
  key:
    files:
      - package-lock.json
  paths:
    - node_modules/

install:
  stage: install
  script:
    - echo "Running npm ci and saving to ./node_modules"
    - npm ci # npm clean install
    - echo "Clean install completed"

unit-test-job: # This job runs in the test stage.
  stage: test # It only starts when the job in the install stage completes successfully.
  script:
    - echo "Running unit tests as defined in ./tests"
    - npm test
    - echo "Test(s) completed"
So, we now have an install stage and job that runs npm ci (npm clean install). We’ve also defined a cache that stores what npm ci creates, so it can be reused in subsequent stages/jobs (also read this on caches versus artifacts in GitLab’s documentation). And we’ve implemented our unit-test job, which runs npm test – the cached contents created in the install job are used here. The cache key is based on the file hash of package-lock.json; if this file is updated, the cache is refreshed.
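To sketch the difference between the two: a cache is a best-effort speed-up tied to the runner’s cache storage, while artifacts are always uploaded to GitLab and handed to jobs in later stages. A hypothetical variant of the install job that uses artifacts instead of a cache could look like this (illustrative only, not part of the pipeline above):

```yaml
# Illustrative sketch: pass node_modules/ to later jobs as an artifact
# instead of a cache. Artifacts are uploaded to GitLab and downloaded
# by jobs in subsequent stages; a cache may or may not be available.
install:
  stage: install
  script:
    - npm ci
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 hour # keep artifacts small and short-lived
```

For dependency directories a cache is usually the better fit; artifacts shine for build output you want to inspect or download from the UI.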

Build a container
Now we’re ready to build a container that runs our application. For this, we’ll add a build-and-push-container job to our build stage. We’re using plain Docker here, which means we need a Dockerfile that defines how the container image is built:
FROM node:21-alpine
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 80
CMD [ "npm", "start" ]
And in the .dockerignore we have:
node_modules
The build-and-push-container job now looks like this:
build-and-push-container:
  stage: build
  image: docker:28.4.0
  services:
    - docker:28.4.0-dind
  variables:
    DOCKER_TLS_CERTDIR: ""
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA
  script:
    - echo "Building and pushing Docker image..."
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
As you can see, we are using some predefined variables to create a unique tag for our container image, and to log in to our (GitLab-hosted) container registry:
- $CI_REGISTRY_IMAGE = Base address for the container registry including group/project folder, for example registry.gitlab.example.com/my_group/my_project.
- $CI_COMMIT_REF_SLUG = The lowercase, max 63 bytes, version of the $CI_COMMIT_REF_NAME variable (everything except 0-9 and a-z is replaced with -).
- $CI_COMMIT_SHORT_SHA = The first eight characters of CI_COMMIT_SHA.
- $CI_REGISTRY_USER = The username to push containers to the project’s GitLab container registry.
- $CI_REGISTRY_PASSWORD = The password to push containers to the GitLab project’s container registry. The password is the same as $CI_JOB_TOKEN and is valid only as long as the job is running.
- $CI_REGISTRY = Address of the container registry (no groups/folder), formatted as registry.gitlab.example.com
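To make the tag assembly concrete, here is the same expression evaluated with hypothetical values (in a real job, GitLab injects these variables automatically):

```shell
# Hypothetical values for illustration; GitLab injects the real ones at job runtime.
CI_REGISTRY_IMAGE="registry.gitlab.example.com/my_group/my_project"
CI_COMMIT_REF_SLUG="feature-login"
CI_COMMIT_SHORT_SHA="a1b2c3d4"

# Same expression as in the job's variables block
IMAGE_TAG="$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA"
echo "$IMAGE_TAG"
# → registry.gitlab.example.com/my_group/my_project:feature-login-a1b2c3d4
```

Because the tag combines the branch slug with the short commit SHA, every pipeline run on every branch produces a distinct, traceable image tag.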
Running the updated pipeline results in a container image being published to the GitLab registry, which you can see under Deploy->Container Registry.

Note: We disable TLS (DOCKER_TLS_CERTDIR: "") between the Docker client and the Docker-in-Docker service for simplicity in this example. In production, consider enabling TLS for enhanced security.
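As a sketch of the TLS-enabled variant (based on GitLab’s documented Docker-in-Docker setup; it assumes the runner is configured to share the /certs/client volume between the job and the service container):

```yaml
# TLS-enabled Docker-in-Docker sketch. The runner configuration must
# mount /certs/client into both the job and the dind service container.
build-and-push-container:
  stage: build
  image: docker:28.4.0
  services:
    - docker:28.4.0-dind
  variables:
    # Pointing DOCKER_TLS_CERTDIR at a shared volume makes dind generate
    # certificates there, and the Docker client picks them up automatically.
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker build -t $IMAGE_TAG .
    - docker push $IMAGE_TAG
```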
Deploy the container to Kubernetes
There are different options for deploying your container to Kubernetes. On the GitLab documentation website, a GitOps approach using FluxCD is explained. In this scenario the pipeline builds the OCI image, and FluxCD checks the registry for new images and deploys them if there are any changes. A similar setup can be achieved using ArgoCD, explained here and here. I’ll also have a look at how to connect Kubernetes to GitLab, but that is for a future post as well.
For the sake of “simplicity” (simple as in: easy to understand without knowing the GitOps tooling), I’ll just do a manual deployment to Kubernetes. This means I need a little bit of YAML for the actual Kubernetes deployment, service, and ingress configuration. The Kubernetes YAML files are saved in a k8s folder that is part of my project.
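As an illustration, a hypothetical k8s/deployment.yaml could look like this; the ${...} placeholders are filled in by envsubst in the deploy job (names and replica count are just examples, not the exact files from my project):

```yaml
# k8s/deployment.yaml (illustrative sketch; ${...} placeholders are
# substituted by envsubst in the deploy job before kubectl apply)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
  namespace: ${KUBE_NAMESPACE}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nodejs-app
  template:
    metadata:
      labels:
        app: nodejs-app
    spec:
      imagePullSecrets:
        - name: gitlab-registry-auth # created by the deploy job
      containers:
        - name: nodejs-app
          image: ${IMAGE_TAG}
          ports:
            - containerPort: 80
```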
The actual deployment to Kubernetes is done using the following script:
deploy-production:
  image: alpine/k8s:1.34.1
  stage: deploy
  environment:
    name: production
    url: http://$FQDN_PRODUCTION
  variables:
    KUBE_NAMESPACE: $KUBE_NAMESPACE_PRODUCTION
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA
    FQDN: $FQDN_PRODUCTION
  script:
    - apk add --no-cache gettext
    - kubectl config use-context ${KUBE_CONTEXT}
    - kubectl create namespace ${KUBE_NAMESPACE} --dry-run=client -o yaml | kubectl apply -f -
    - kubectl delete secret gitlab-registry-auth -n ${KUBE_NAMESPACE} || echo "Cannot delete - secret doesn't exist"
    - kubectl create secret docker-registry gitlab-registry-auth -n ${KUBE_NAMESPACE} --docker-server="${CI_REGISTRY}" --docker-username="${CI_DEPLOY_USER}" --docker-password="${CI_DEPLOY_PASSWORD}"
    - echo ${IMAGE_TAG}
    - for file in k8s/*.yaml; do envsubst < "$file" | kubectl apply -f -; done
  when: manual
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
For the deployment to Kubernetes I am using the alpine/k8s image, which has much of the Kubernetes tooling preinstalled. The job sets some environment and generic variables; the actual script interacts with the connected Kubernetes cluster (configured under Operate->Kubernetes clusters).
It creates a namespace and a secret that allows Kubernetes to pull images from the GitLab registry that is part of my project:
kubectl create secret docker-registry gitlab-registry-auth -n ${KUBE_NAMESPACE} --docker-server="${CI_REGISTRY}" --docker-username="${CI_DEPLOY_USER}" --docker-password="${CI_DEPLOY_PASSWORD}"
The actual deployment, in this example in a manual way, is executed in this line:
for file in k8s/*.yaml; do envsubst < "$file" | kubectl apply -f -; done
The envsubst command replaces the value of some of the variables available in .gitlab-ci.yml in the Kubernetes YAML files.
The variables that are being used are:
- $FQDN_PRODUCTION = the Fully Qualified Domain that will be configured on my ingress and will make the app available to the world.
- $KUBE_NAMESPACE_PRODUCTION = the namespace I’m deploying to, the value is nodejs-cicd-production for this example.
- $CI_DEPLOY_USER = Unlike the previously used CI_REGISTRY_USER (which is only valid during job execution), CI_DEPLOY_USER is based on a gitlab-deploy-token and remains valid after the job completes. This makes it more suitable for Kubernetes deployments, which may continue running after the GitLab job finishes.
- $CI_DEPLOY_PASSWORD = The CI_DEPLOY_PASSWORD is a different password than the previously used CI_REGISTRY_PASSWORD and is also based on the gitlab-deploy-token.
The when: manual statement requires an extra user confirmation before the actual deployment to the “production” K8S environment.
Implementing a deployment to a staging environment
Because the script is parameterized, it’s quite straightforward to add a staging job:
deploy-staging:
  image: alpine/k8s:1.34.1
  stage: deploy
  environment:
    name: staging
    url: http://$FQDN_STAGING
  variables:
    KUBE_NAMESPACE: $KUBE_NAMESPACE_STAGING
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA
    FQDN: $FQDN_STAGING
  script:
    - apk add --no-cache gettext
    - kubectl config use-context ${KUBE_CONTEXT}
    - kubectl create namespace ${KUBE_NAMESPACE} --dry-run=client -o yaml | kubectl apply -f -
    - kubectl delete secret gitlab-registry-auth -n ${KUBE_NAMESPACE} || echo "Cannot delete - secret doesn't exist"
    - kubectl create secret docker-registry gitlab-registry-auth -n ${KUBE_NAMESPACE} --docker-server="${CI_REGISTRY}" --docker-username="${CI_DEPLOY_USER}" --docker-password="${CI_DEPLOY_PASSWORD}"
    - echo ${IMAGE_TAG}
    - for file in k8s/*.yaml; do envsubst < "$file" | kubectl apply -f -; done
  rules:
    - if: '$CI_COMMIT_BRANCH == "master"'
By changing some of the parameters, the “staging version” is deployed to a different namespace on the same Kubernetes cluster.
I hope this gives you a bit of context on how to get started with CI/CD in GitLab.
What’s Next?
We’ve built a solid CI/CD foundation, but modern applications need more than just builds and deployments—they need security built in from the start.
In Part 2, we’ll enhance our pipeline with GitLab’s comprehensive security scanning capabilities:
- SAST, DAST, and Secret Detection – Find vulnerabilities before they reach production
- Dependency and Container Scanning – Secure your software supply chain
- IaC Scanning – Catch infrastructure misconfigurations early
- Templates vs. Components – Understanding GitLab’s reusable building blocks
- Custom Stage Integration – Configure scanners to fit your workflow
You’ll learn multiple approaches to implementing security—from manual configuration to policy-based enforcement—and see how to integrate these scanners seamlessly into the pipeline we built in Part 1.
Continue to Part 2: Implement Security Scanning