Amplify mock can’t mock the REST API

When you add an API to an Amplify app with the amplify add api command, either a GraphQL or a REST API is created. For local testing, you can use the amplify mock api command.
However, this command is available only for GraphQL APIs.

When you run amplify mock api without having added a GraphQL API, the command returns the error shown below.

Failed to start API Mocking. Running cleanup tasks.
TypeError: Cannot read property 'stop' of undefined
    at APITest.stop (/snapshot/repo/build/node_modules/amplify-util-mock/lib/api/api.js:187:33)
    at APITest.start (/snapshot/repo/build/node_modules/amplify-util-mock/lib/api/api.js:150:18)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at async start (/snapshot/repo/build/node_modules/amplify-util-mock/lib/api/index.js:18:5)
    at async Object.run (/snapshot/repo/build/node_modules/amplify-util-mock/lib/commands/mock/api.js:21:5)
    at async Object.executeAmplifyCommand (/snapshot/repo/build/node_modules/amplify-util-mock/lib/amplify-plugin-index.js:47:3)
    at async executePluginModuleCommand (/snapshot/repo/build/node_modules/@aws-amplify/cli-internal/lib/execution-manager.js:142:5)
    at async executeCommand (/snapshot/repo/build/node_modules/@aws-amplify/cli-internal/lib/execution-manager.js:40:9)
    at async Object.run (/snapshot/repo/build/node_modules/@aws-amplify/cli-internal/lib/index.js:117:5)

This error does not describe the actual situation: the mock cleanup fails after the real error, that no AppSync API exists, occurs. For now, I have submitted a pull request to show the correct error. I'm sure it will be fixed soon.


Cognito ID pool issuer validation always fails if the identity provider URL ends with / on web identity federation with Auth0

Summary

  • If the provider URL of the identity provider ends with /, issuer validation in the Cognito ID pool always fails.
  • When temporary security credentials are created through an OpenID Connect (OIDC) identity provider, you need to add separate IAM identity providers for the Cognito ID pool and for STS, even for the same provider.

Issuing temporary security credentials in IAM for users authenticated by an OIDC provider

There are two ways to issue temporary security credentials in IAM for a user authenticated by Auth0: using a Cognito ID pool, or using STS (Security Token Service) directly. The credential issuing process is almost the same in both cases, because Cognito uses STS internally. However, the verification of the ID token differs.

In case of Cognito ID Pool

This article uses a sample app federated with OpenID Connect (Auth0). First you need to add an identity provider on the IAM Identity providers page, as below.

  • Provider type: OpenID Connect
  • Provider URL: Copy the domain of the Auth0 application.
  • Audience: Copy the Client ID of the Auth0 application.

Do not end the Provider URL with /.

The next step is to enable the identity provider on the Cognito identity provider page, either by creating a new identity pool or by using an existing one.

To obtain temporary security credentials from the Cognito ID pool, follow the identity pools (federated identities) authentication flow. A JavaScript sample is shown below.

const {
  CognitoIdentityClient,
  GetIdCommand,
  GetCredentialsForIdentityCommand,
} = require("@aws-sdk/client-cognito-identity");

const client = new CognitoIdentityClient({ region: "us-east-1" });
const getIdCommand = new GetIdCommand({
  IdentityPoolId: "us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  Logins: {
    "AUTH0_DOMAIN.auth0.com": idToken,
  },
});
const cognitoId = await client.send(getIdCommand);
const getCredentialsForIdentityCommand = new GetCredentialsForIdentityCommand(
  {
    IdentityId: cognitoId.IdentityId,
    Logins: {
      "AUTH0_DOMAIN.auth0.com": idToken,
    },
  }
);
const awsCredentials = await client.send(getCredentialsForIdentityCommand);

awsCredentials contains temporary security credentials for accessing AWS resources. You can access AWS resources under the role assumed for the authenticated user, as in the sketch below.
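
For example, here is a minimal sketch of using the returned credentials with the S3 client. The bucket name is a placeholder, and the call succeeds only if the identity pool's authenticated role actually allows the operation.

const { S3Client, ListObjectsV2Command } = require("@aws-sdk/client-s3");

// Build an S3 client from the temporary credentials returned by Cognito.
const s3 = new S3Client({
  region: "us-east-1",
  credentials: {
    accessKeyId: awsCredentials.Credentials.AccessKeyId,
    secretAccessKey: awsCredentials.Credentials.SecretKey,
    sessionToken: awsCredentials.Credentials.SessionToken,
  },
});

// Works only if the assumed role allows s3:ListBucket on this bucket.
const objects = await s3.send(
  new ListObjectsV2Command({ Bucket: "YOUR_BUCKET_NAME" })
);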

In case of using STS directly

To obtain temporary security credentials from STS directly, use the AssumeRoleWithWebIdentity API.

When you are not using a Cognito ID pool like this, there is no role to assume yet, so you need to follow Creating a role for web identity or OIDC to create one.

A JavaScript sample is shown below.

const { STSClient, AssumeRoleWithWebIdentityCommand } = require("@aws-sdk/client-sts");

const client = new STSClient({ region: "us-east-1" });
const command = new AssumeRoleWithWebIdentityCommand({
  RoleArn: "arn:aws:iam::XXXXXXXXXXXX:role/Auth0SampleRole",
  RoleSessionName: "Auth0AssumeRoleSession",
  WebIdentityToken: idToken,
});
const awsCredentials = await client.send(command);

The identity provider settings are also required here, but you might assume everything is fine because the Cognito ID pool was already configured in the previous section.

However, when you execute this API, you get the following error.

{
    "errorType": "InvalidIdentityTokenException",
    "errorMessage": "No OpenIDConnect provider found in your account for https://AUTH0_DOMAIN.auth0.com/",
    "name": "InvalidIdentityTokenException",
    "$fault": "client",
    "$metadata": {
        "httpStatusCode": 400,
        "requestId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
        "attempts": 1,
        "totalRetryDelay": 0
    },
    "Error": {
        "Type": "Sender",
        "Code": "InvalidIdentityToken",
        "Message": "No OpenIDConnect provider found in your account for https://AUTH0_DOMAIN.auth0.com/",
        "message": "No OpenIDConnect provider found in your account for https://AUTH0_DOMAIN.auth0.com/"
    },
    ...snip...  
}

This error says the API couldn't find the OpenID Connect provider. This is strange, because the identity provider was configured and works correctly with the Cognito ID pool.

Answer

In fact, you need to add an identity provider separate from the one used by the Cognito ID pool. Try adding an identity provider as before, but don't forget to add / at the end of the provider URL.

The role also needs to be edited, because it references the identity provider. If editing does not work, you may have to recreate the role.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam:XXXXXXXXXXXX:oidc-provider/dev-AUTH0_DOMAIN.auth0.com/"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "AUTH0_DOMAIN.auth0.com/:aud": "AUTH0_CLIENT_ID"
                }
            }
        }
    ]
}

This time you get a success response.

What do you think happens if the provider URL ends with / when using the Cognito ID pool?

{
    "errorType": "NotAuthorizedException",
    "errorMessage": "Token is not from a supported provider of this identity pool.",
    "name": "NotAuthorizedException",
    "$fault": "client",
    "$metadata": {
        "httpStatusCode": 400,
        "requestId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
        "attempts": 1,
        "totalRetryDelay": 0
    },
    "__type": "NotAuthorizedException",
    ...snip...
}

“Token is not from a supported provider of this identity pool.” suggests that the provider name configured under Authentication providers and the key given in the Logins option of GetId differ. Change the code to add / as follows and run it again.

const getIdCommand = new GetIdCommand({
  IdentityPoolId: "us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
  Logins: {
-    "AUTH0_DOMAIN.auth0.com": idToken,
+    "AUTH0_DOMAIN.auth0.com/": idToken,
  },
});

const getCredentialsForIdentityCommand = new GetCredentialsForIdentityCommand(
  {
    IdentityId: cognitoId.IdentityId,
    Logins: {
-    "AUTH0_DOMAIN.auth0.com": idToken,
+    "AUTH0_DOMAIN.auth0.com/": idToken,
    },
  }
);

You’ll get this error.

{
    "errorType": "NotAuthorizedException",
    "errorMessage": "Invalid login token. Issuer doesn't match providerName",
    "name": "NotAuthorizedException",
    "$fault": "client",
    "$metadata": {
        "httpStatusCode": 400,
        "requestId": "XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX",
        "attempts": 1,
        "totalRetryDelay": 0
    },
    "__type": "NotAuthorizedException",
    ...snip...
}

Now the Cognito ID pool can find the identity provider, but the provider name no longer matches the issuer of the ID token because of the added /. An ID token issued by Auth0 contains an iss claim like the following; note that it ends with /.

{
  "iss": "https://AUTH0_DOMAIN.auth0.com/",
}

I think the Cognito ID pool compares https://${identity provider domain} with the iss claim of the ID token. When you add / at the end of the provider URL, the duplicated / makes the verification fail, as the sketch below illustrates. Maybe this behavior will change again for identity providers whose iss claim does not end with /.
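
Here is a sketch of my guess at the comparison. This is an illustration only, not actual AWS code.

// The provider name registered with a trailing slash
const providerName = "AUTH0_DOMAIN.auth0.com/";
// The iss claim from the Auth0 ID token
const iss = "https://AUTH0_DOMAIN.auth0.com/";

// "https://AUTH0_DOMAIN.auth0.com//" !== "https://AUTH0_DOMAIN.auth0.com/"
console.log(`https://${providerName}` === iss); // => false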

If you run into these errors, I hope the information in this article helps you resolve them.

AWS STS is NOT able to accept the access token issued by Auth0 to create temporary security credentials.

Summary

When calling AssumeRoleWithWebIdentity on AWS STS (Security Token Service), STS can NOT validate the aud claim of an access token issued by Auth0.

The solutions are to use the ID token or a Lambda authorizer.

How to Reproduce

This post shows a sample API, protected by Auth0 authentication, that responds with results built from AWS resources.

Building a frontend

You need to build a frontend app and configure Auth0 by following the Auth0 Quickstarts.

Refer to Auth0 React SDK Quickstarts: Call an API for calling an API from the frontend app.

Configure Auth0 API

To open the API to authenticated Auth0 users, create the API on the Auth0 dashboard.

API settings on Auth0 dashboard

You can use any Identifier you want; Auth0 recommends using the API endpoint. This article uses https://example.com/ as the Identifier.

Configure the AWS

Setting Identity providers

After this step, the app will pass the access token issued by Auth0 to AWS STS. Therefore, you need to add Auth0 as an identity provider on AWS; refer to Creating IAM identity providers.

Go to “Identity providers” in the IAM console and add a provider as below.

IAM identity provider settings
  • Provider type: OpenID Connect
  • Provider URL: Copy the domain from the Auth0 dashboard.
  • Audience: Copy the API Audience from the Auth0 dashboard.

Creating a role to delegate permissions to an AWS service

On the identity providers page, click “Assign role” to create a new role. You need to attach the policies for the AWS resources you want to use. See Creating a role for web identity or OIDC.

A sample trust relationship is shown below.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Federated": "arn:aws:iam::XXXXXXXXXXXX:oidc-provider/YOUR_AUTH0_DOMAIN.auth0.com"
            },
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {
                "StringEquals": {
                    "YOUR_AUTH0_DOMAIN.auth0.com:aud": "https://example.com/"
                }
            }
        }
    ]
}

Creating API

When the API is created on Lambda, you can access AWS resources simply by attaching a policy to the function's role.

But on your own server or on GCP Cloud Functions, you can't do that. In that case, one way is to access AWS resources with the access keys issued by STS.

This is an example of accessing AWS resources: when a request arrives, the API receives the access token and asks STS to exchange it for AWS temporary credentials.

const jose = require("jose");
const {
  STSClient,
  AssumeRoleWithWebIdentityCommand,
} = require("@aws-sdk/client-sts");

exports.handler = async (event, context) => {
  const accessToken = event.headers.authorization.split(" ")[1];
  console.log("[Access Token] ", accessToken);

  const JWKS = jose.createRemoteJWKSet(
    new URL("https://YOUR_AUTH0_DOMAIN.auth0.com/.well-known/jwks.json")
  );

  // Verifying access token
  const { payload, protectedHeader } = await jose.jwtVerify(accessToken, JWKS, {
    issuer: "https://YOUR_AUTH0_DOMAIN.auth0.com/",
    audience: "https://example.com/",
  });
  console.log("[Protected Header] ", protectedHeader);
  console.log("[Payload] ", payload);

  // Requesting temporary security credentials from STS
  const client = new STSClient({ region: "us-east-1" });
  const command = new AssumeRoleWithWebIdentityCommand({
    RoleArn: "arn:aws:iam::XXXXXXXXXXXX:role/Auth0SampleRole",
    RoleSessionName: "Auth0AssumeRoleSession",
    WebIdentityToken: accessToken,
  });

  try {
    const awsCredentials = await client.send(command);
    console.log("[STS Credentials] ", awsCredentials);

    // DO Something with AWS Resource
  } catch (error) {
    // error handling.
    console.log("[STS Error] ", error);
  }

  return {};
};

An error occurs

When this API is executed, an error like the one below occurs.

InvalidIdentityTokenException: Incorrect token audience
{
  '$fault': 'client',
  '$metadata': {
    httpStatusCode: 400,
    requestId: 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
    extendedRequestId: undefined,
    cfId: undefined,
    attempts: 1,
    totalRetryDelay: 0
  },
  Error: {
    Type: 'Sender',
    Code: 'InvalidIdentityToken',
    Message: 'Incorrect token audience',
    message: 'Incorrect token audience'
  },
  RequestId: 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX',
  xmlns: 'https://sts.amazonaws.com/doc/2011-06-15/'
}

“Incorrect token audience” means the aud claim of the access token and the audience of the identity provider do not match.

Why does this error occur?

You can see what the access token contains using jwt.io. An access token issued by Auth0 has an aud claim like this.

"aud": [
  "https://example.com/",
  "https://YOUR_AUTH0_DOMAIN.auth0.com/userinfo"
],

The aud claim identifies the recipients that the JWT is intended for. During AssumeRoleWithWebIdentity, STS verifies that the aud claim of the access token matches the audience of the identity provider.

In the general case, when the API responds with the “Incorrect token audience” message, you need to check that the two values are the same. But in this sample, the audience parameter is contained in the aud claim correctly.

In fact, STS always fails validation if the aud claim is specified as an array.

The aud claim may contain either an array of strings or a single string value, as defined in RFC 7519. However, STS accepts only a single string. When you use the ID token instead of the access token, you can see that STS accepts it.

I contacted AWS support and was told that STS cannot accept an aud claim that is an array. This behavior is the same for Cognito ID pools.
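
You can check the shape of the aud claim yourself with jose, which the sample already uses. This only decodes the token for inspection; it does not verify it.

const jose = require("jose");

// Decode the token locally just to inspect the aud claim.
const { aud } = jose.decodeJwt(accessToken);

if (Array.isArray(aud)) {
  // STS will reject this token with "Incorrect token audience".
  console.log("aud is an array:", aud);
} else {
  console.log("aud is a single string:", aud);
}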

Alternative Solutions

You will probably not face this situation often, but here are some alternative solutions.

Using ID Token

If your API is served within the same domain as the frontend app, using the ID token is suitable. Because the ID token issued by Auth0 contains a single-string aud claim, STS can accept the token.
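
A minimal sketch: the same AssumeRoleWithWebIdentity call as before, but passing the ID token. Note that the identity provider's audience must then match the ID token's aud claim (the Auth0 client ID), not the API identifier.

const command = new AssumeRoleWithWebIdentityCommand({
  RoleArn: "arn:aws:iam::XXXXXXXXXXXX:role/Auth0SampleRole",
  RoleSessionName: "Auth0AssumeRoleSession",
  WebIdentityToken: idToken, // ID token instead of the access token
});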

Using Lambda Authorizer

Alternatively, you can use a Lambda authorizer to verify the access token yourself.

Secure AWS API Gateway Endpoints Using Custom Authorizers
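
As a rough sketch (my own illustration, not code from that article), a TOKEN-type API Gateway Lambda authorizer could verify the Auth0 access token with jose and return an IAM policy:

const jose = require("jose");

const JWKS = jose.createRemoteJWKSet(
  new URL("https://YOUR_AUTH0_DOMAIN.auth0.com/.well-known/jwks.json")
);

exports.handler = async (event) => {
  // "Bearer <token>" from the Authorization header
  const token = event.authorizationToken.split(" ")[1];

  // Verify signature, issuer, and audience ourselves.
  const { payload } = await jose.jwtVerify(token, JWKS, {
    issuer: "https://YOUR_AUTH0_DOMAIN.auth0.com/",
    audience: "https://example.com/",
  });

  // Allow the caller to invoke the requested API method.
  return {
    principalId: payload.sub,
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        {
          Action: "execute-api:Invoke",
          Effect: "Allow",
          Resource: event.methodArn,
        },
      ],
    },
  };
};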

LDAP Auth on Mastodon

Mastodon uses its local database as the default authentication method.

However, under some circumstances, you can use external authentication providers. In its first release, LDAP auth had some problems, but now it works fine. I think few people need LDAP auth, but there is no documentation for setting it up.

This blog post describes how to set up LDAP authentication.

Setup LDAP auth

The available settings are listed in .env.production.sample. This post uses Mastodon v3.1.4.

# LDAP authentication (optional)
# LDAP_ENABLED=true
# LDAP_HOST=localhost
# LDAP_PORT=389
# LDAP_METHOD=simple_tls
# LDAP_BASE=
# LDAP_BIND_DN=
# LDAP_PASSWORD=
# LDAP_UID=cn
# LDAP_MAIL=mail
# LDAP_SEARCH_FILTER=(|(%{uid}=%{email})(%{mail}=%{email}))
# LDAP_UID_CONVERSION_ENABLED=true
# LDAP_UID_CONVERSION_SEARCH=., -
# LDAP_UID_CONVERSION_REPLACE=_

LDAP_METHOD

Only simple_tls or start_tls is acceptable.

LDAP_TLS_NO_VERIFY is a hidden parameter that disables verification of the SSL/TLS certificate of the LDAP connection.

LDAP_BASE

Base DN

LDAP_BIND_DN

Set your bind DN here. Anonymous bind is unsupported.

LDAP_UID

LDAP attribute for username.

LDAP_MAIL

LDAP attribute for email address.

LDAP_SEARCH_FILTER

You can set the LDAP search filter used to match the LDAP user. The LDAP search filter syntax is defined in RFC 4515.

A sample setting is as follows.

LDAP_UID=cn 
LDAP_MAIL=mail
LDAP_SEARCH_FILTER=(|(%{uid}=%{email})(%{mail}=%{email}))

In the filter, %{uid} is replaced with LDAP_UID, %{mail} with LDAP_MAIL, and %{email} with the email address the user entered.

When a user logs in with the default LDAP settings above,

(|(cn=hoge@example.com)(mail=hoge@example.com))

will be created as the search filter.

LDAP_UID_CONVERSION_ENABLED

Mastodon restricts the characters allowed in usernames. The following pattern defines the allowed characters.

/[a-z0-9_]+([a-z0-9_\.-]+[a-z0-9_]+)?/i

Because of this, characters that are prohibited in Mastodon may appear in LDAP usernames.

LDAP_UID_CONVERSION_SEARCH lists the characters that exist in LDAP but are prohibited in Mastodon. LDAP_UID_CONVERSION_REPLACE is the character that replaces those prohibited characters.

With the sample settings above, . and - in an LDAP username will be replaced with _. For instance, if the user ash-phy exists in LDAP, he will be ash_phy in Mastodon. The sketch below illustrates the conversion.
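
A small illustration of the conversion, as I read these settings (not Mastodon's actual code):

// LDAP_UID_CONVERSION_SEARCH=., -  /  LDAP_UID_CONVERSION_REPLACE=_
const search = [".", "-"];
const replace = "_";

const mastodonUsername = search.reduce(
  (name, ch) => name.split(ch).join(replace),
  "ash-phy"
);
console.log(mastodonUsername); // => "ash_phy"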

How does Mastodon change after LDAP is enabled?

No separate registration is needed.

When a new user logs in to Mastodon, the account is created immediately. Even if the administrator sets the registrations mode to “Nobody can sign up”, a new LDAP user can still sign up. No confirmation mail is sent at this time.

When enabling LDAP on a running server

When you enable LDAP auth on a running Mastodon server, local and LDAP authentication are active at the same time. Mastodon first tries to authenticate with LDAP; if that fails, it tries local authentication. Already-existing users can use both their LDAP and local passwords. Even after enabling LDAP, the server can still create and use local users.

Walking the Milford Track: the finest walk in the world

Mackinnon Pass Memorial 1146m

The Milford Track is a 53.5 km trekking route in Fiordland National Park on the South Island of New Zealand. The walk is praised as “the finest walk in the world”. The steep glacier-carved valleys are among the best views anywhere.

Milford Track

The Milford Track is limited to 90 people a day, and you cannot walk it freely: after booking all the huts in advance, you have to walk the scheduled itinerary. It is a very popular track, but it is very quiet.

To walk the track, you can either join the Guided Walk conducted by Ultimate Hikes, or arrange the public huts yourself and make it your own personal walk (Individual Walk). This time I joined the five-day guided walk starting November 25, 2017.

Day 1: Queenstown to Glade House

There is no big town near the Milford Track, so the base is Queenstown, a little way away. Queenstown is a small town, but with its many outdoor shops it is the best place to prepare for all kinds of activities.

The guided walk arranges a shuttle from Queenstown, which first takes about 5 hours to Te Anau.

The Milford Track is walled in by high mountains, and the trailhead is not easy to reach. So we went to a place called Te Anau Downs and took a boat from there across Lake Te Anau for about an hour.


By the way, you can also get in over a pass called Dore Pass, but it is difficult to go through.


Just about 20 minutes from here, we arrived at the first accommodation, Glade House.

The first lodge, Glade House

Both the guided walk and the individual walk have their own places to stay; on the guided walk you stay at private lodges owned by Ultimate Hikes. These lodges provide a three-course dinner, showers, and a powerful drying room, and depending on the plan you can even choose a private room!

The mountain huts for individual walkers are self-catering. There are some staff, but only bunk beds, cooking facilities, and toilets. The first hut on the individual walk is about an hour beyond the guided-walk lodge.

Nature Walk

On the first day, the guides take you on a tour called the Nature Walk around the lodge, explaining the surrounding vegetation and birds.


Fiordland National Park is known as a region with very high rainfall. Because of that, you can see very beautiful mosses.

Beautiful Mosses

Glade House

The private lodge has a spacious lobby where you can relax after arrival.

Glade House lounge

The best attraction of the guided tour must be the dinners.

antipasto

Dinner is served as a course with an appetizer, a main, and dessert. The main can be chosen from three options each day: beef, chicken or fish, or a vegetarian meal.

I chose venison on this day.

venison

Day 2: Glade House to Pompolona Lodge

Making Lunch

Everyone makes sandwiches for lunch before breakfast. We could also take chocolate, nuts, and fruit such as bananas and apples, so there was no need to prepare our own trail food.

Lunch Making

Clinton River

Going along the Clinton River.

Clinton River

On the second day I walked about 16 km, but it is the easiest stretch. As this picture shows, the trail is very well maintained, with no missing signboards and everything from small bridges to big ones.

Because the trail is nearly flat, you can walk at a relaxed pace.


Up to 50 people can join the guided walk, accompanied by 4 guides. Rather than walking in single file as in Japan, two guides walk at the front and back and two more move up and down the line, so participants can walk freely at their own pace. One of the guides was Japanese.

A good thing about this course is that you can take your time and enjoy it, because there are few dangerous places on the trail and nothing to worry about.


Speaking of New Zealand, you may think of manuka honey. The manuka tree also grows here on the Milford Track.

Manuka

The Milford Track has many fallen trees. On the land scoured by glaciers the soil seems too thin, and trees that grow to a certain size fall over. Moss grows on the fallen trees, and new buds sprout and grow.


The Milford Track is a course that walks through such a valley. The surrounding mountains are about 2,000 m high, but until about 10,000 years ago all of this was under the glacier.


As the glacier flowed toward the sea over a very long time, it carved the valley some 2,000 m deep, creating terrain like this. As you pass through the forest, the valley seems to loom over you.

Swimming Pool

The last feature of the day is Lake Prairie. It was named the Swimming Pool and you can swim in it, but this day was 22°C: hot for walking, yet too cold for swimming. Everyone took off their shoes and used it like a footbath, which was very refreshing.

Swimming Pool

Bus Stop

It is only a little farther to the day's accommodation, Pompolona Lodge, but on the way a bus stop suddenly appears.

The Bus Stop that never comes

Actually, in heavy rain the river ahead becomes impossible to cross. In that case, this bus stop serves as a rain shelter. If the rain is so hard that the track is impassable for a long time, a helicopter might even be used.


Pompolona Lodge

This lodge has lots of special touches.

Pompolona Lodge

Scones made from the original recipe, inherited from when the lodge was built 100 years ago, welcome you.

Pompolona Lodge's original-recipe scones

I chose coconut curry for today’s dinner.


Dessert was a delicious Crème Brûlée.

Crème brûlée

The kea, a kind of parrot, likes playing with humans. Climbing boots left outside are toys for the kea. They often come to play at dinner time and will not flee even if you approach them.

Kea, a mischievous parrot

You can also order eggs Benedict as a special breakfast.

Day 3: Pompolona Lodge to Quintin Lodge

The third day covers 15 km over Mackinnon Pass, the highest point of the Milford Track, and is the hardest day on the track. It works out to about 700 m of ascent and about 900 m of descent.

Weka, a bird that can't fly

Because there were no natural predators, you can see many flightless birds here, such as the kiwi, New Zealand's national bird, and the weka.

Weka, a bird can't fly

The birds here are not wary at all and do not run away. It is perfect for wild bird observation.

South Island robin

Mackinnon Pass

The highest point of the Milford Track is Mackinnon Pass. Although it is only 1,140 m high, the tree line is around 700 to 800 m, so the scenery is outstanding. It is a place where you hope for clear weather.


Mackinnon Pass Memorial 1146m

When I was taking photos, a kea came and cut into the frame.


We had lunch here on the third day. The photo shows the toilet with the best view in the world.

The toilet with the most beautiful view in the world

Emergency Track

The Milford Track has a lot of avalanches. The sharp slopes the glaciers carved cause avalanches everywhere. The track opens in November, but until around the beginning of December some sections may be closed due to avalanche alerts.


The emergency track is the old trail. It is shorter than the new one, but it is steep, so mind your knees.


By the way, our departure happened to coincide with a tour from Japan, so more than half of the participants were Japanese. Normally only 1 or 2 out of 50 people are Japanese.


There were four couples on their honeymoon. I recommend it.

Sutherland Falls

When we arrived at Quintin Lodge, we left our luggage at the lodge and went to see Sutherland Falls. The trail continues from the back of Quintin Lodge.


It is the biggest waterfall in New Zealand, with a drop of 580 m. Even though this day was sunny and the water volume was low, we were drenched in so much spray that it was impossible to take the camera out of the backpack.

Sutherland Falls

Quintin Lodge

The dinner room. There is a piano in every lodge. Is this really a mountain hut?

Quintin Lodge

Today's dinner was steak (it was a bit tough).


Hot drinks are free and unlimited: black tea, green tea, Milo, coffee, and espresso are available. Cold drinks such as Coke and Sprite and alcohol such as wine and beer are charged for. A wide variety of wines is offered, and they are not so expensive.

Day 4: Quintin Lodge to Mitre Peak Lodge

Finally, the last day of walking. On the final day we walk 21 km, but it is almost flat, so there is nothing to worry about.


Deep forest and deep valley alternate, showing various faces. I found a weka with a chick here.


This is the Southern Hemisphere: watch out for the ultraviolet rays.


Giant Gate Falls at the lunch point. It feels so good that you want to stay for a long time.


When the last-1-mile sign appeared, I felt sad about leaving this track, so I walked slowly.


The end of the track is Sandfly Point: 33.5 miles, or 53.5 km, from the beginning.

Sandfly Point is the end of this journey

Sandflies are similar to mosquitoes; they swarm all over the Milford Track, and their bites are nasty. According to Maori legend, the gods made them so that people would not stay too long in this wonderful place.

Take the boat at Sandfly Point and head to the last lodge, Mitre Peak Lodge.

Mitre Peak Lodge

At the end of the journey we head to Mitre Peak Lodge, the only accommodation right in Milford Sound. Mitre Peak Lodge is a private lodge dedicated to this tour; you cannot stay there unless you join the guided walk. Milford Sound is a very famous sightseeing spot visited by many tourists, but this is the only lodge where you can stay while watching this view.

Paradise Duck

Awesome View.

Awesome view from Mitre Peak Lodge


The last day's dinner was lamb. It was very soft and tasty.


Day 5: Mitre Peak and back to Queenstown

Milford Sound Cruising

There is no walking on the last day. A roughly two-hour cruise on Milford Sound is included in the tour.

Milford Sound is a very precious place where you can see wild penguins and seals.

Mitre Peak

Bye

If I could, I would have kept walking; it was a wonderful tramping trip.

ARM build on drone.io

Normally, builds on drone.io are executed on x86 or x64.
In some cases, you may want to run builds on another architecture.

Drone supports arm and arm64 from v0.8.
ARM is used in the Raspberry Pi, in mobile devices such as Android, and in IoT devices. Introducing CI/CD into the development of embedded devices like these is very important.

This article describes how to introduce ARM-architecture builds on drone.io.

Introducing ARM agent

Although the ARM architecture is now supported by drone.io, the drone server itself must run on x64. If you already have a server, use it.

If you want to build on ARM, set up an agent in an ARM environment. A Raspberry Pi with Docker installed is an easy way to make an agent; QEMU virtualization is also available. Agent images are provided for the arm and arm64 environments, with tags such as linux-arm and linux-arm64. Finally, you need to set DOCKER_ARCH=arm as an environment variable, as follows.


version: '2'
services:
  drone-agent:
    image: drone/agent:linux-arm
    command: agent
    restart: always
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - DRONE_SERVER=127.0.0.1:9000
      - DRONE_SECRET=00e40bfb287a0a553c80297a
      - DOCKER_ARCH=arm

ARM .drone.yml

If a build starts in this state, the drone server cannot detect the architecture and cannot select a suitable agent. To specify the architecture, add the platform key to .drone.yml as follows.
You can mix multi-architecture agents under one server, because the server chooses the agent.


platform: linux/arm

pipeline:
  build:
    image: arm32v7/busybox:latest
    commands:
      - uname -a


When the build runs…

+ uname -a
Linux 39737a2a3d25 4.13.9-300.fc27.armv7hl #1 SMP Mon Oct 23 15:02:20 UTC 2017 armv7l GNU/Linux

on armv7, yeah!

ARM builds require ARM Docker images. Official ARM-based images are provided on Docker Hub, so you can customize those.

ARM Plugins

There is one caveat for ARM builds: they require ARM versions of the plugins. Most official drone plugins support ARM; however, you must rebuild your own plugins' images on ARM.

KitchenCI Infrastructure Spec on drone.io

Infrastructure testing is becoming more important, and there are several tools for continuous delivery of the infrastructure layer.

KitchenCI (test-kitchen) provides a test harness to execute infrastructure code on one or more platforms in isolation.

Although KitchenCI uses Vagrant to operate virtual machines by default, the kitchen-docker driver lets it run on Docker containers.

With this driver, we can realize automated infrastructure tests on drone.io.

Docker in Docker

Drone runs tests in a Docker container. Because KitchenCI builds a container and provisions Chef recipes into it, executing KitchenCI on drone means building a container inside a container. This is called Docker in Docker (dind).


A little work is needed to run dind. Official dind images are provided on Docker Hub; tags including “dind” are for dind usage.

First, write a services section in .drone.yml to enable dind. Port 2375 is the Docker port. You also need to turn on the “Trusted” flag for your project, because the dind image requires the privileged flag.


pipeline:
  build:
    image: aberrios85/drone-kitchen
    commands:
      - kitchen test

services:
  docker:
    image: docker:1.12-dind
    privileged: true
    command: [ "--storage-driver=vfs", "--tls=false" ]


Because the ChefDK package does not include kitchen-docker, you must prepare an image with it installed. This article uses aberrios85/drone-kitchen.

When using the dind Docker socket from other containers, set DOCKER_HOST as follows, pointing it at the port configured in the services section.


pipeline:
  build:
    image: docker:latest
    environment:
      - DOCKER_HOST=tcp://docker:2375
    commands:
      - docker info


Kitchen-docker Settings

To use the dind Docker socket in KitchenCI, change the config file .kitchen.yml as follows. The socket setting changes the Docker endpoint.


driver:
  name: docker
  socket: tcp://docker:2375


It takes a long time to create, converge, set up, verify, and destroy, but the automated infrastructure test works well.

Running dind containers without the trusted flag

The trusted flag can be set only by administrators, which is painful for general users.
If only reliable users access your drone, set the environment variable DRONE_ESCALATE to docker on the drone server, and drone will enable privileged mode automatically.

Drone.io cache strategy after v0.5

Drone provided a standard cache function until v0.4.
The cache function reuses build artifacts saved at the end of the previous build.

It reduces build time by preserving node_modules for Node.js or bundler gems for Ruby. From drone v0.5, the standard cache function was removed, and plugins are provided instead. The plugin approach brings some problems, so this article introduces the alternative cache functions.

Volume Cache Plugin

If you expect the same behavior as before, the volume-cache plugin is available.

http://plugins.drone.io/drillster/drone-volume-cache/

This plugin saves the build artifacts to an arbitrary path on the agent that runs the build.

To enable caching, put cache restore and rebuild steps around the build step as follows.


pipeline:
  restore-cache:
    image: drillster/drone-volume-cache
    restore: true
    mount:
      - ./node_modules
    volumes:
      - /tmp/cache:/cache
  build:
    image: node
    commands:
      - npm install
  rebuild-cache:
    image: drillster/drone-volume-cache
    rebuild: true
    mount:
      - ./node_modules
    volumes:
      - /tmp/cache:/cache


Because mounting volumes requires the trusted flag, which only administrators can set, management becomes a heavy task. The trusted flag also opens the Docker socket to plugins, so users can mount any path on the agent host.

Still, it is the best choice for small-scale usage.

S3 Cache Plugin

The s3-cache plugin, which uses AWS S3 as the cache storage, is also available.

http://plugins.drone.io/drone-plugins/drone-s3-cache/

When your drone is hosted on AWS, this plugin is a convenient choice. If not, you can choose an S3-compatible storage; Minio is the best option for a self-hosted drone.

https://minio.io/

Minio can be launched on Docker immediately and works fine with drone.
In docker-compose, the settings are as follows.


minio:
  image: minio/minio
  ports:
    - "9000:9000"
  volumes:
    - /var/lib/minio/data:/data
    - /var/lib/minio/config:/root/.minio
  command: server /data
  environment:
    - "MINIO_ACCESS_KEY=DRONE"
    - "MINIO_SECRET_KEY=DRONEDRONE"


Minio uses port 9000 by default, but drone v0.8 uses port 9000 for gRPC communication with the agents.
To resolve the port conflict, launch Minio on another node or have it listen on port 80.

Minio is a simple storage for a single user, so the access and secret keys have to be shared with all users. But this does not matter much for cache storage.

A setting example follows. Note that although the documentation uses the url option for the S3 endpoint, the endpoint option is required instead.


pipeline:
  restore-cache:
    image: plugins/s3-cache:1
    pull: true
    endpoint: http://minio:9000
    access_key: DRONE
    secret_key: DRONEDRONE
    restore: true
  build:
    image: node
    commands:
      - npm install
  rebuild-cache:
    image: plugins/s3-cache:1
    pull: true
    endpoint: http://minio:9000
    access_key: DRONE
    secret_key: DRONEDRONE
    rebuild: true
    mount:
      - node_modules
  flush_cache:
    image: plugins/s3-cache:1
    pull: true
    endpoint: http://minio:9000
    access_key: DRONE
    secret_key: DRONEDRONE
    flush: true
    flush_age: 14


The default cache path is /<owner>/<repo>/<branch>/. Because a separate cache is generated for each branch, the cache becomes less effective in workflows that create a branch for every feature request, like GitHub Flow. If you specify the path option with a fixed value as below, the same cache artifact can be used every time.


restore-cache:
  image: plugins/s3-cache:1
  pull: true
  endpoint: http://minio:9000
  access_key: DRONE
  secret_key: DRONEDRONE
  restore: true
  path: '/org-name/repo-name/'
rebuild-cache:
  image: plugins/s3-cache:1
  pull: true
  endpoint: http://minio:9000
  access_key: DRONE
  secret_key: DRONEDRONE
  rebuild: true
  path: '/org-name/repo-name/'
  mount:
    - node_modules


Organization names shorter than 3 characters

If your drone is connected to GitHub and your organization name is shorter than 3 characters, rebuild-cache will fail because the bucket name violates the validation rule requiring 3 or more characters. (It will deadlock and die on the rebuild step.)
In this case, use the path option to avoid the problem.

flush

flush is the function that erases old caches.
Since Minio data appears as regular files on the host side, if the tmpwatch targets include the Minio mount path, users do not have to write flush_cache explicitly.

Conclusion

The cache function was removed from the standard features in v0.5, but now users can freely delete stale caches with the plugins. Try more drone.

Measuring pending tests for OSS drone.io v0.4

The number of tests that can execute at the same time equals the number of registered Docker nodes. When the limit is exceeded, the remaining tests are delayed. Measuring the load is important, because one long task can easily block many other tests.

$ sqlite3 /var/lib/drone/drone.sqlite "select count(*) from jobs where job_status = 'pending';"
2

If you want more detailed information:

select repo_owner, repo_name, repo_private, build_branch
from jobs
inner join builds on jobs.job_build_id = builds.build_id
inner join repos on builds.build_repo_id = repos.repo_id
where job_status = 'pending';

The job status list and schema are available on GitHub.

I created a bot that watches the pending tests and notifies HipChat; a sketch of the idea follows.
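
A minimal sketch of such a watcher in Node.js. This is my illustration, not the actual bot: Node 18+ is assumed for the global fetch, and the HipChat room ID and token are placeholders.

const { execSync } = require("child_process");

// Count pending jobs the same way as the CLI example above.
const pending = parseInt(
  execSync(
    "sqlite3 /var/lib/drone/drone.sqlite \"select count(*) from jobs where job_status = 'pending';\""
  ).toString(),
  10
);

if (pending > 0) {
  // HipChat API v2 room notification.
  fetch("https://api.hipchat.com/v2/room/ROOM_ID/notification", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer HIPCHAT_TOKEN",
    },
    body: JSON.stringify({ message: `${pending} drone jobs are pending` }),
  }).catch(console.error);
}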

Encrypting secrets in OSS drone.io v0.4

OSS drone.io works with other services for notifications and deployments.

But once you commit a password or an authentication token for another service into .drone.yml, your sensitive data becomes public.
Drone provides “secrets” for encrypting your sensitive data. The official documents describe how to do this with the command-line tool, but you can also generate the secrets in the Web UI.

Generating secrets

First, open your repository's settings page on drone and select the “SECRETS” tab.

Input your secrets under the environment node as YAML, like below.

drone-secrets

Generate the output and copy it into a “.drone.sec” file at the top level of the repository.

Referring to the secrets

You can refer to the secrets using $$ in .drone.yml. For example, the HipChat notification settings look like this:

notify:
  hipchat:
    auth_token: $$HIPCHAT_TOKEN
    room_id_or_name: 'test'
    notify: true