ARM builds on Drone

Normally, Drone builds are executed on x86 or x64.
In some cases, you may want to run builds on another architecture.

Drone supports arm and arm64 from v0.8.
ARM is used in the Raspberry Pi, in mobile devices such as Android, and in IoT devices. Introducing CI/CD into the development of embedded devices like these is very important.

This article describes how to introduce ARM architecture builds on Drone.

Introducing ARM agent

Although the ARM architecture is now supported, you still need to run the drone server on x64. If you already have a server, use it.

If you want to build on ARM, set up an agent in an ARM environment. A Raspberry Pi with docker installed is an easy way to make an agent; qemu virtualization is also available. Agent images are provided for arm and arm64 environments, with tags such as linux-arm and linux-arm64. Finally, you need to set DOCKER_ARCH=arm as an environment variable, as follows.
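As a sketch, assuming docker-compose on the agent host and the drone v0.8 agent image with the linux-arm tag (the server address and shared secret are placeholders):

```yaml
# docker-compose.yml on the ARM host (e.g. a Raspberry Pi)
version: '2'
services:
  drone-agent:
    image: drone/agent:linux-arm              # ARM tag mentioned above
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=drone.example.com:9000   # hypothetical x64 server address
      - DRONE_SECRET=changeme                 # placeholder shared secret
      - DOCKER_ARCH=arm                       # report this agent as ARM
```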

ARM .drone.yml

If a build starts in this condition, the drone server is not able to detect the architecture and cannot select a suitable agent to run it on. To specify the architecture, add the platform key to .drone.yml as follows.
You can mix multi-architecture agents under one server because the drone server chooses the agent.
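For example (a sketch; the build image is an assumption, any ARM image works):

```yaml
# .drone.yml
platform: linux/arm

pipeline:
  build:
    image: arm32v7/alpine    # an official ARM-based image
    commands:
      - uname -a
```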

When the build runs:

+ uname -a
Linux 39737a2a3d25 4.13.9-300.fc27.armv7hl #1 SMP Mon Oct 23 15:02:20 UTC 2017 armv7l GNU/Linux

on armv7, yeah!

ARM builds require ARM docker images. Official ARM-based images are provided on Docker Hub, so you can base your customized images on them.

ARM Plugins

There is one caveat for ARM builds: they require ARM versions of the plugins. Most official drone plugins provide ARM support; however, you must rebuild your own plugin images on ARM.

KitchenCI Infrastructure Spec on Drone

Infrastructure testing is becoming more important, and there are several tools for continuous delivery of the infrastructure layer.

KitchenCI (test-kitchen) provides a test harness to execute infrastructure code on one or more platforms in isolation.

Although KitchenCI uses Vagrant to operate virtual machines by default, the kitchen-docker driver lets it operate on docker containers.
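A minimal .kitchen.yml using the docker driver might look like this (platform and recipe names are placeholders):

```yaml
# .kitchen.yml
driver:
  name: docker

platforms:
  - name: ubuntu-16.04

suites:
  - name: default
    run_list:
      - recipe[myapp::default]   # hypothetical cookbook recipe
```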

This driver lets us realize automated infrastructure tests on Drone.

Docker in Docker

Drone runs tests in docker containers. Because KitchenCI builds a container and provisions Chef recipes into it, executing KitchenCI on drone means building a container inside a container. This is called Docker in Docker (dind).


Running dind needs a little extra work. Official dind images are provided on Docker Hub; tags including “dind” are for dind usage.

First, write a services section in .drone.yml to enable dind. Port 2375 is the docker daemon port. You need to turn on the “Trusted” flag on your project because the dind image requires the privileged flag.
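The services section might look like this (a sketch assuming the official docker:dind image):

```yaml
# .drone.yml
services:
  docker:
    image: docker:dind
    privileged: true    # requires the "Trusted" project flag
```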

Because the ChefDK package does not include kitchen-docker, you must prepare an image with it installed. This article uses aberrios85/drone-kitchen.

When using the docker socket of dind from other containers, set DOCKER_HOST as follows, pointing it at the port set in the services section.
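For example, assuming the dind service above is named docker and listens on 2375:

```yaml
pipeline:
  kitchen:
    image: aberrios85/drone-kitchen
    environment:
      - DOCKER_HOST=tcp://docker:2375   # service name and port from the services section
    commands:
      - kitchen test
```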

Kitchen-docker Settings

To use the dind docker socket from KitchenCI, change the config file .kitchen.yml as follows. The socket setting changes the docker endpoint.
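A sketch of the relevant .kitchen.yml section, assuming the dind service is reachable as docker:2375:

```yaml
# .kitchen.yml
driver:
  name: docker
  socket: tcp://docker:2375   # point kitchen-docker at the dind daemon
```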

It takes a long time to create, converge, setup, verify, and destroy; however, the automated infrastructure test works well.

Running dind containers without the trusted flag

The trusted flag can be set only by administrators, which is a pain for general users.
If only reliable users access the drone server, set the environment variable DRONE_ESCALATE to include the docker image on the drone server, and drone will enable the privileged flag automatically.

Cache strategy after v0.5

Drone provided a standard cache function until v0.4.
The cache function reuses build artifacts saved at the end of the previous build.

It reduces build time by preserving node_modules for Node.js or bundler gems for Ruby. From drone v0.5, the standard cache function was removed and is instead provided by plugins. The plugin approach introduces some problems. This article introduces alternative cache functions.

Volume Cache Plugin

If you expect the same behaviour as before, the volume-cache plugin is available.

This plugin saves build artifacts to an arbitrary path on the agent that runs the build.

To enable the cache, put cache restore and rebuild steps around the build step as follows.
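A sketch assuming the drillster/drone-volume-cache image (the article does not name a specific image) and a Node.js project:

```yaml
pipeline:
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    mount:
      - ./node_modules
    volumes:
      - /tmp/cache:/cache          # arbitrary path on the agent host

  build:
    image: node:8
    commands:
      - npm install
      - npm test

  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    mount:
      - ./node_modules
    volumes:
      - /tmp/cache:/cache
```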

This plugin requires the trusted flag. Because the flag can be set only by administrators, management becomes a heavy task. The trusted flag also opens the docker socket to plugins, so a user can mount any path on the agent host.

It is the best choice for small-scale usage.

S3 Cache Plugin

The s3-cache plugin, which uses AWS S3 as cache storage, is also available.

When your drone is hosted on AWS, this plugin is a convenient choice. If not, you can choose an S3-compatible storage; Minio is the best option for a self-hosted drone.

Minio can be launched on docker immediately and works fine with drone.
With docker-compose, the settings are as follows.
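A docker-compose sketch for Minio (the access keys and host paths are placeholders):

```yaml
version: '2'
services:
  minio:
    image: minio/minio
    command: server /export
    ports:
      - "80:9000"                        # listen on 80 to avoid drone's gRPC port
    volumes:
      - /var/lib/minio:/export
    environment:
      - MINIO_ACCESS_KEY=AKEXAMPLEKEY        # placeholder
      - MINIO_SECRET_KEY=secretexamplekey    # placeholder
```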

Minio uses port 9000 by default; however, drone v0.8 uses port 9000 for gRPC communication with agents.
To resolve the port conflict, launch Minio on another node or have it listen on port 80.

Minio is a simple storage for a single user, so the access and secret keys need to be shared with all users. But this does not matter much for cache storage.

A settings example is shown below. Note that although the plugin documentation uses the url option for the S3 endpoint setting, “endpoint” is required instead of url.

By default, the cache path includes the repository owner, repository name, and branch, so a separate cache is generated per branch. This reduces the effectiveness of the cache in workflows that create a branch for every feature request, such as GitHub Flow. If you specify the path option with a fixed value as below, the same cache artifact can be reused every time.
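A sketch assuming the plugins/s3-cache image and a self-hosted Minio endpoint (all names, keys, and paths are placeholders):

```yaml
pipeline:
  restore-cache:
    image: plugins/s3-cache
    endpoint: http://minio.example.com   # "endpoint", not "url"
    access_key: AKEXAMPLEKEY
    secret_key: secretexamplekey
    restore: true
    path: /myorg/myrepo/cache            # fixed path shared by all branches

  build:
    image: node:8
    commands:
      - npm install

  rebuild-cache:
    image: plugins/s3-cache
    endpoint: http://minio.example.com
    access_key: AKEXAMPLEKEY
    secret_key: secretexamplekey
    rebuild: true
    path: /myorg/myrepo/cache
    mount:
      - node_modules
```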

Organization names shorter than 3 characters

If your drone is connected to GitHub and the organization name is shorter than 3 characters, rebuild-cache will fail because the bucket name violates the S3 validation rule requiring 3 or more characters. (It will deadlock and die at the rebuild step.)
In this case, use the path option to avoid the problem.


flush is the function that erases old caches.
Since minio objects can be seen as regular files from the host side, if the tmpwatch target includes the minio mount path, the user does not have to explicitly write flush_cache.


The cache function was removed from the standard features in v0.5, but now users can freely delete stale caches with plugins. Try more drone.

Measuring pending tests for OSS v0.4

The number of tests executable at the same time equals the number of registered docker nodes. When the limit is exceeded, the remaining tests are delayed. Measuring the load is important, because a long task can easily block many other tests.

$ sqlite3 /var/lib/drone/drone.sqlite "select count(*) from jobs where job_status = 'pending';"

If you want more detailed information:

select repo_owner, repo_name, repo_private, build_branch
from jobs
inner join builds on jobs.job_build_id = builds.build_id
inner join repos on builds.build_repo_id = repos.repo_id
where job_status = 'pending';

The job status list and schema are available on GitHub.

I created a bot that watches the pending tests and notifies HipChat.

Encrypting secrets in OSS v0.4

OSS drone works with other services for notifications and deployments.

But once you commit a password or authentication token for another service in .drone.yml, your sensitive data becomes public.
Drone provides “secrets” that encrypt your sensitive data. The official documents describe how to do this with the command-line tool; however, you can also generate the secrets on the Web UI.

Generating secrets

First, open your repository settings page on drone and select the “SECRETS” tab.

Input your secrets under the environment node as YAML, like below.
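For example (the token value is a placeholder):

```yaml
environment:
  HIPCHAT_TOKEN: xxxxxxxxxxxxxxxxxxxx   # plaintext value to encrypt
```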


Generate and copy the output text into a “.drone.sec” file at the top level of the repository.

Referring to the secrets

You can refer to the secrets using $$ in .drone.yml. For example, in hipchat notification settings:

notify:
  hipchat:
    auth_token: $$HIPCHAT_TOKEN
    room_id_or_name: 'test'
    notify: true

OSS v0.3 behind a reverse proxy

OSS drone is the open source platform version of the hosted drone.io service.

drone v0.4 has breaking changes. Without converting .drone.yml, all tests will fail. One mitigation is a parallel run with name-based virtual hosts.

An Apache2 configuration sample is below.

<VirtualHost *:443>
  # no forward proxy
  ProxyRequests Off
  ProxyPass /api/stream/ wss://localhost:8080/api/stream/
  ProxyPassReverse /api/stream/ wss://localhost:8080/api/stream/
  ProxyPass / https://localhost:8080/
  ProxyPassReverse / https://localhost:8080/
  ProxyPreserveHost On
</VirtualHost>

drone v0.3 does not support reverse proxies: X-Forwarded-Host is not interpreted. ProxyPreserveHost passes the Host header from the incoming request through to drone, so the authentication callback returns to the proper URL (not localhost).

Drone uses websockets, so apache2 needs mod_proxy_wstunnel. On Ubuntu:

sudo a2enmod proxy_wstunnel

After restarting apache, v0.3 and v0.4 run on the same host.