
Release: 'Run Saleor with CI/CD'

Pre-release
@Chetabahana Chetabahana released this 07 Jul 10:59
· 715 commits to master since this release

CI/CD Solution

What is called CI/CD or CICD here stands for the combined practices of continuous integration and continuous delivery and/or continuous deployment.

Configuration

To run CI/CD we need a trigger to start the process. This trigger can be initiated through any git action on your source code. The simplest approach is to use a tool called cronjob.

Cronjob

The cron software utility is a time-based job scheduler in Unix-like operating systems. Users who set up and maintain software environments use cron to schedule jobs (commands or shell scripts).

Source

  • Set hourly crontab
    The cronjob runs periodically at fixed times, dates, or intervals.
$ crontab -e
# Edit this file to introduce tasks to be run by cron.
# 
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
# 
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
# 
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
# 
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
# 
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
# 
# For more information see the manual pages of crontab(5) and cron(8)
# 
# m h  dom mon dow   command

0 * * * * sh ~/.cronjob/cron.sh
  • cron.sh
    Contains a git pull followed by a git reset.
    This completely resets the master branch that was pushed to the fork with the contents of the upstream master repo.
$ cat << 'EOF' > ~/.cronjob/cron.sh
#!/bin/sh

TASK_NAME=Google-Tasks-API
BASE_NAME=Tutorial-Buka-Toko
UPSTREAM=git@github.com:mirumee/saleor.git
TASK_GIT=git@github.com:MarketLeader/$TASK_NAME.git

eval `ssh-agent`
cd ~/.gits/$BASE_NAME
expect ~/.cronjob/agent > /dev/null
git remote set-url upstream $UPSTREAM
git checkout master && git fetch --prune upstream
if [ `git rev-list HEAD...upstream/master --count` -eq 0 ]
then
    echo "all the same, do nothing"
else
    echo "update exist, do checking!"
    git pull --rebase upstream master
    git reset --hard upstream/master
    cd ~/.gits/$TASK_NAME
    push $TASK_GIT
fi
eval `ssh-agent -k`
EOF
$ apt-get update > /dev/null
$ export DEBIAN_FRONTEND=noninteractive
$ apt-get install --assume-yes --no-install-recommends apt-utils expect > /dev/null
$ cat << 'EOF' > ~/.cronjob/agent
#!/usr/bin/expect -f
set HOME $env(HOME)
spawn ssh-add $HOME/.ssh/id_rsa
expect "Enter passphrase for $HOME/.ssh/id_rsa:"
send "<my_passphrase>\r";
expect "Identity added: $HOME/.ssh/id_rsa ($HOME/.ssh/id_rsa)"
interact
EOF
$ chmod +x ~/.cronjob/agent
$ chmod 600 $HOME/.ssh/id_rsa
$ sudo ln -s $HOME/.ssh /root/.ssh
  • automate git push
    Usage: $ push <repo_url>
    Set remote git URI to git@github.com rather than https://github.com
$ cat << 'EOF' > ~/.cronjob/push.sh
#!/bin/sh
BRANCH=`git rev-parse --abbrev-ref HEAD`
git remote set-url origin ${1} && git pull origin $BRANCH
sed -i "s/-[0-9]\{1,\}-\([a-zA-Z0-9_]*\)'/-`date +%d%H%M`-cron'/g" cloudbuild.yaml
git status && git add . && git commit -m "cron commit on `date +%Y-%m-%d\ %H:%M`"
git push origin $BRANCH
EOF
$ sudo ln -s ~/.cronjob/push.sh /bin/push
$ sudo chmod +x /bin/push
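
The sed line inside push.sh stamps the version tag in cloudbuild.yaml (shown later as `_VERSION: 'v1-121615-cron'`) with the current day/hour/minute. A standalone sketch of that substitution on a throwaway file:

```shell
# Demonstrate the version-stamp rewrite that push.sh performs on cloudbuild.yaml
printf "_VERSION: 'v1-121615-cron'\n" > /tmp/cloudbuild.yaml
sed -i "s/-[0-9]\{1,\}-\([a-zA-Z0-9_]*\)'/-`date +%d%H%M`-cron'/g" /tmp/cloudbuild.yaml
cat /tmp/cloudbuild.yaml   # e.g. _VERSION: 'v1-070859-cron'
```

The changed tag is what makes the subsequent commit non-empty, so there is always something to push.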
  • test cron
    Usage: $ sh ~/.cronjob/cron.sh
    Below is the output when the source on the target is up to date
Agent pid 9373
Already on 'master'
all the same, do nothing
Agent pid 9373 killed

Result

AutoSync is applied using a cronjob on the fork repository Tutorial-Buka-Toko

  • Update notification : "This branch is even with mirumee:master" as shown below:

[Screenshot: GitHub, MarketLeader/Tutorial-Buka-Toko, 2019-06-24]

Resources

The cronjob explained above is run against another repo, so it does not update the target one directly. This is useful when the target source is large enough that updating it directly is expensive.

Therefore, using another repository, we can use cloud build steps to do that update. Below is a comparison of resource usage between a direct update and an update through a builder.

[Screenshot: GCP Console, instance backend resource usage, 2019-06-22]

And below is the steady state while the hourly cronjob is checking the source for an update

[Screenshot: GCP Console, instance backend resource usage, 2019-06-24]

This helps to avoid CPU bursting/burstable CPU throttling. The documented behavior of shared-core machine types and bursting states that “f1-micro instances get 0.2 of a vCPU and is allowed to burst up to a full vCPU for short periods. g1-small instances get 0.5 of a vCPU and is allowed to burst up to a full vCPU for short periods.”


External IP

The above scheme is made through an instance. You may do it privately on a VM instance or Kubernetes Engine; otherwise the following charges apply for external IP addresses starting January 1st, 2020:
[Screenshot: notification email about External IP charges, 2019-08-20]

Builder

Once an update happens upstream, the cronjob above will trigger the update on the forked repository. If you use it on Google Cloud Build, you may set a mirror configuration and manage it with git commit

cloudbuild.yaml

You may want to integrate your private repository into your steps without exposing even its name. This is possible when you grant an IAM role to the builder. Then you can call it without any credential, like below:

steps:
- name: '${_SOURCE}/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
        gcloud source repos clone --verbosity=none `gcloud source \
        repos list --limit=1 --format 'value(REPO_NAME)'` .io
        find . -type f -name gcloud.env -exec bash {} $PROJECT_ID \
        $BUILD_ID $REPO_NAME $BRANCH_NAME $TAG_NAME \;
        
- name: '${_SOURCE}/docker'
  entrypoint: 'bash'
  args:
  - '-c'
  - |
        find . -type f -name docker.env -exec bash {} $PROJECT_ID \
        $BUILD_ID $REPO_NAME $BRANCH_NAME $TAG_NAME \;

substitutions:
  _VERSION: 'v1-121615-cron'
  _SOURCE: gcr.io/cloud-builders

timeout: '60s'

Note:

sed -i "s/-[0-9]\{1,\}-\([a-zA-Z0-9_]*\)'/-`date +%d%H%M`-cron'/g" cloudbuild.yaml

Environment

You can put your variables in an env configuration or set them like above, then call them like this:

export PROJECT_ID=${1}
export BUILD_ID=${2}
export REPO_NAME=${3}
export BRANCH_NAME=${4}
export TAG_NAME=${5}
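
The build steps above pass that metadata to the env scripts as positional parameters via `find … -exec bash {} $PROJECT_ID …`. A throwaway stand-in script (hypothetical file and values, not the release's actual gcloud.env) shows how the mapping works:

```shell
# Hypothetical stand-in for gcloud.env: map positional args to named variables
cat > /tmp/demo.env << 'EOF'
export PROJECT_ID=${1}
export BUILD_ID=${2}
export REPO_NAME=${3}
echo "$PROJECT_ID/$REPO_NAME"
EOF
bash /tmp/demo.env my-project build-1234 saleor   # prints my-project/saleor
```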

You can put your other variables in the file or in a separate environment file like this:

MY_VAR1=var1
MY_VAR2=var2
MY_VAR3=var3
...
...

Then call them like this:

while read -r line; do eval export "$line"; done <$PROJECT_ID.env
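
An alternative to the eval loop (an assumption on my part, not what this release uses) is to let the shell auto-export everything it sources while `set -a` is active:

```shell
# Hypothetical env file, only for demonstration
printf 'MY_VAR1=var1\nMY_VAR2=var2\n' > /tmp/demo-project.env

set -a               # auto-export every assignment made from here on
. /tmp/demo-project.env
set +a

echo "$MY_VAR1"      # prints var1
```

This avoids `eval`, which can execute arbitrary content if the env file is not fully trusted.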

Google KMS

To use your SSH key with Cloud Build, you must use a Cloud KMS CryptoKey.
You will need to Enable KMS API then run command as below:

$ KEY_NAME=env_keys
$ KEYRING_NAME=my-keyring
$ gcloud config set project $GOOGLE_CLOUD_PROJECT
$ SERVICE_ACCOUNT=<your cloudbuild service account>
$ gcloud kms keyrings create $KEYRING_NAME --location=global
$ gcloud kms keys create $KEY_NAME \
  --location=global --keyring=$KEYRING_NAME --purpose=encryption
$ gcloud kms encrypt --plaintext-file=$HOME/.ssh/$KEY_NAME \
  --ciphertext-file=$HOME/.ssh/$KEY_NAME.enc --key=$KEY_NAME \
  --location=global --keyring=$KEYRING_NAME
$ gcloud kms keys add-iam-policy-binding $KEY_NAME \
  --location=global --keyring=$KEYRING_NAME \
  --member=serviceAccount:$SERVICE_ACCOUNT@cloudbuild.gserviceaccount.com \
  --role=roles/cloudkms.cryptoKeyDecrypter

Check stored key at the kms console and use it as below:

for i in key_1 key_2 key_3; do
  if [ -f $HOME/.ssh/$i.enc ]
  then
    gcloud kms decrypt \
    --keyring my-keyring --key $i \
    --plaintext-file $HOME/.ssh/$i \
    --ciphertext-file $HOME/.ssh/$i.enc \
    --location global
  fi
done

If you want to change the key file, you only need to run the encrypt command again:

$ KEY_NAME=env_keys
$ KEYRING_NAME=my-keyring
$ gcloud kms encrypt --location=global --keyring=$KEYRING_NAME --key=$KEY_NAME \
  --plaintext-file=$HOME/.ssh/$KEY_NAME --ciphertext-file=$HOME/.ssh/$KEY_NAME.enc \
  --project $GOOGLE_CLOUD_PROJECT --configuration $CLOUDSDK_ACTIVE_CONFIG_NAME

Then replace the enc file with the new one.

IMPORTANT:
Please be careful when handling these encrypted/decrypted keys. If a credential is accidentally exposed, your account might be suspended, and to appeal you will need to take the following actions:

  1. Log in to the Google Cloud Console and review the activity on your account.
  2. Revoke all (or listed) credentials for compromised Service Accounts. As every resource accessible to the Service Account may have been affected, it is best to rotate all credentials on potentially affected projects. For more details, review the instructions available here.
  3. Delete all unauthorized VMs or resources if you see any.
  4. Take immediate steps to ensure that your Service Account credentials are not embedded in public source code systems, stored in download directories, or unintentionally shared in other ways.

Hook Listener

When you work with an image builder such as Docker Hub, there are options to set webhooks, which are POST requests sent to a URL you define. You may install captainhook as the hook listener:

$ sudo apt-get update
$ export GOPATH=$HOME/.go
$ sudo apt-get --assume-yes install golang-go
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/bketelsen/captainhook
$ sudo cp $GOPATH/bin/captainhook /usr/local/bin/
$ sudo mkdir -p /etc/captainhook
$ nohup captainhook -listen-addr=0.0.0.0:8080 -echo \
-configdir /etc/captainhook > /dev/null 2>&1 &

Add a script

$ sudo tee /etc/captainhook/cron.json << EOF > /dev/null
{
    "scripts": [
        {
            "command": "sh",
            "args": [
                "$HOME/.cronjob/cron.sh"
            ]
        }
    ]
}
EOF

Set as a service

$ sudo tee /etc/systemd/system/captainhook.service << EOF > /dev/null
[Unit]
Description=Captainhook a generic webhook endpoint
Documentation=https://github.com/bketelsen/captainhook
After=network.target

[Service]
ExecStart=/usr/local/bin/captainhook -configdir /etc/captainhook -listen-addr 0.0.0.0:8080 -echo

[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl start captainhook
$ sudo systemctl enable captainhook
Created symlink /etc/systemd/system/multi-user.target.wants/captainhook.service →
/etc/systemd/system/captainhook.service.

Test using curl

$ curl -X POST http://<HOST-IP>:8080/cron
{
  "results": [
    {
      "stdout": "Agent pid 3040\nall the same, do nothing\nAgent pid 3040 killed\n",
      "stderr": "Already on 'master'\n",
      "status_code": 0
    }
  ]
}

Example callback payload:

{
  "state": "success",
  "description": "387 tests PASSED",
  "context": "Continuous integration by Acme CI",
  "target_url": "http://ci.acme.com/results/afd339c1c3d27"
}

Codefresh

In this release we discuss a CI/CD solution from an online service called Codefresh, which offers to automate and simplify everything from code to cloud. You may sign up for the service here.

Scope of the service

It claims to be the first continuous integration and delivery platform that puts the container image at the center, empowering teams of all sizes to build and release pipelines faster at scale.


Integration and Delivery

The Codefresh platform provides a fully automated continuous deployment workflow, from code through automated testing for integration, performance, and security, to container orchestration.

Below is a figure of the continuous integration and delivery platform for Docker and Kubernetes that can be built using the service.

[Figure: CI/CD platform for Docker and Kubernetes]

Set Trigger

There are options when you want to initiate a CI/CD process on Codefresh with a cronjob, based on updates that happen on the upstream of your forked repository.

  • Send a trigger from the cron directly to Codefresh.
  • Update the fork using the cronjob, which triggers the process.
  • Let the cron update another repository, which triggers the process.

As a matter of security, and because of some integrations already made on the system, I prefer the last option, to prevent the process from running when the update has not been tested yet.

Once an update happens on the upstream, the cronjob above will update the $TASK_REPO. So we may set a trigger to run a test in Codefresh on each update before we push it to the forked repo.

[Screenshot: Codefresh pipeline workflow editor, 2019-07-03]

You may also set the trigger using the Codefresh API/CLI via a curl command like this:

curl 'https://g.codefresh.io/api/builds/5b1a78d1bdbf074c8a9b3458' \
--compressed -H 'content-type:application/json; charset=utf-8' \
-H 'Authorization: <your_key_here>' \
--data-binary '{"serviceId":"5b1a78d1bdbf074c8a9b3458","type":"build",
"repoOwner":"kostis-codefresh","branch":"master","repoName":"nestjs-example",
"variables":{"sample-var1":"sample1","SAMPLE_VAR2":"SAMPLE2"}}'

Variables

You can put your own variables in the Codefresh project such as:
GITHUB_USER, USER_EMAIL, WORKSPACE

It is also possible to use an SSH_KEY in a file:

# Add your SSH key as an encrypted environment variable after processing it with tr:
$ cat ~/.ssh/my_ssh_key_file | tr '\n' ','

# Then in the pipeline use it like this:
echo "${SSH_KEY}" | tr ',' '\n' > ~/.ssh/id_rsa
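
The comma round-trip can be checked locally with a throwaway multi-line file (not a real credential). Note that `echo` appends a final newline, so the reconstructed file may carry a trailing blank line, which ssh tolerates:

```shell
# Fake multi-line "key", only for demonstrating the tr round trip
printf 'line1\nline2\nline3\n' > /tmp/fake_key
SSH_KEY=$(cat /tmp/fake_key | tr '\n' ',')
echo "$SSH_KEY" | tr ',' '\n' > /tmp/fake_key.out
head -3 /tmp/fake_key.out    # the three original lines, restored
```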

You may also set any other variable keys, including kms-decrypt of google-kms, or transfer them from a key file:

[Screenshot: Codefresh project saleor, variables page, 2019-07-07]

Note that the profile of the repo set as the git trigger will be automatically filled in by Codefresh in the variables prefixed with CF_. See the variables page for more details.

codefresh.yml

Your test code doesn't need to live in the repo that fires the git trigger. As long as an account on git providers or external Docker registries is integrated, you or your team may bring in a private repo and images.

For example, using the token in codefresh.yml can let python clone a private repo called $WORKSPACE, then execute pipenv scripts in a new forced branch named after ${CF_BUILD_INITIATOR} and push it back to the forked repo that fired the trigger. By this scenario another process can then be initiated.

Codefresh creates a shared volume in each pipeline that is automatically shared on all freestyle steps. This volume exists at /codefresh/volume by default. You can simply copy files there to have them available in all Codefresh steps (as well as subsequent builds of the same pipeline) like below:

version: '1.0'
stages:
  - stage1
steps:
  read_cf:
    title: 'cf info build'
    image: codefresh/cli
    commands:
      - REPO_TASK=Cloud-Tasks-API
      - rm -rf .io .root $REPO_TASK $CF_REPO_NAME
      - mkdir .root && cf_export ROOT=$(realpath .root)
      - cf_export INIT=$REPO_TASK/.google/cloud/builders/__init__ 
      - git clone https://github.com/MarketLeader/$REPO_TASK.git    

  read_gcloud:
    title: 'gcloud build'
    image: gcr.io/cloud-builders/gcloud
    commands:
      - bash $INIT gcloud && cp -frpT $HOME $ROOT

  main_clone:
    title: 'python test'
    stage: stage1
    image: python:latest
    commands:
      - cp -frpT $ROOT $HOME && bash $INIT python

The code above uses the freestyle step type. You can use any Docker image in a freestyle step. There are also plenty of plugins, including webhook and even an encrypted pre-existing key on import-job using the kms-plugin. This makes integrating Codefresh with various cloud tools very easy.

In case you want to test the code, you can install the Codefresh CLI on your local PC. You may follow how to install it in Cygwin using the installer or the package.

Report

Below is a sample of a log report on Codefresh. This report is for a short process. For a long process you may view the screenshot or download the attached image (4.43 MB).

[Screenshot: Codefresh build log report, 2019-06-22]

Explore

The steps explained above run inside a service called a pipeline. A project in Codefresh can hold many pipelines, where each pipeline can contain many stages of steps, as shown below.

[Figure: pipeline configuration]

Scenario

You may explore which steps of CI/CD are to be run in Codefresh. It is possible to set all steps from source to cloud in Codefresh, but for various reasons this project takes the following scenario.

From Source to Build

  1. Cronjob as explained above is set on update to push a task repo. (Let's call it REPO-TASK-1).
  2. Step on REPO-TASK-1 is set to push update from upstream to the forked repo (REPO-FORK-1: master).
  3. REPO-FORK-1: master is set on update as git trigger to a pipeline on Codefresh (PIPELINE-1).
  4. PIPELINE-1 will take a private repo REPO-CODE-1 as the base to clone and test REPO-FORK-1: master.
  5. When the test passes, PIPELINE-1 is set to push the update to a branch (REPO-FORK-1: branch).
  6. Step on REPO-FORK-1: branch is set on update to rebase another forked repo (REPO-USER-1: master).
  7. REPO-USER-1: master is set as the base for REPO-USER-1: branch, whose branch name is dynamic.
  8. REPO-USER-1: branch is set on update to trigger its automate image build on Docker Hub.

From Build to Cloud

  1. Docker Hub is set to send a webhook when the build image push to the registry.
  2. Captainhook that currently running on the server is set as hook listener from Docker Hub.
  3. The task on Captainhook is pulled to trigger an update on another task repo (REPO-TASK-2).
  4. Update on REPO-TASK-2 is set as git trigger for another pipeline in Codefresh (PIPELINE-2).
  5. PIPELINE-2 will take REPO-TASK-2 as the base to pull and test the new image from Docker Hub.
  6. When the test passes, PIPELINE-2 is set to push to another user repo, REPO-USER-2 (also set as private).
  7. REPO-USER-2 will then conduct a test of compose, once passed deploy it for production.

The code for the above scenario is available in this release but is still premature.
You may check whether it works on your side.

Workflow

Later on you may need to integrate your Codefresh projects with projects developed in other services like GitHub Enterprise, Google Cloud Platform, Jenkins, etc. You can set a cronjob workflow trigger in GitHub Actions that contains many projects, where one or more projects are conducted in Codefresh.

[Screenshot: GitHub Actions main workflow, actions-example-gcloud, 2019-06-22]

Module

Last but not least, this project will be made of several modules that are integrated with each other, as shown below. Each module may have many such workflows as above. We will discuss it in detail in further releases.

[Figure: module flow diagram]