First Steps with Platform Automation¶
This lab introduces you to the Platform Automation project by having you build a pipeline that exercises a test Platform Automation task.
Download the artifacts¶
Navigate to the Tanzu Network and locate the product named Platform Automation.
Note the two artifacts: Concourse Tasks and the Docker Image for Concourse Tasks.
You already have the docker image from a prior lab.
You used it to run a local docker container in order to invoke the command p-automator
which in turn created the VM instance in GCP.
Navigate to `~/workspace/artifacts` and download `platform-automation-tasks` using the `pivnet` CLI. Note the size of the artifact.
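If you haven't used the `pivnet` CLI before, the download looks roughly like this. The release version shown is a placeholder; substitute the version your course uses:

```shell
# Illustrative only: the release version below is a placeholder.
pivnet download-product-files \
  --product-slug=platform-automation \
  --release-version=5.0.0 \
  --glob='platform-automation-tasks-*.zip'
```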
Look inside the file:

```shell
unzip platform-automation-tasks*.zip
```
Each task is a pair of files: a shell script and a YAML wrapper. The YAML file is the Concourse task definition; it delegates to the shell script.

Review the test task (`test.yml` and `test.sh`). Note how the shell script invokes `p-automator` and `om`. Both `p-automator` and `om` are provided by the Docker image.
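As a mental model, the wrapper follows this pattern. This is a simplified sketch of the script-delegation idea, not the exact shipped file:

```yaml
# Sketch of the task-wrapper pattern; the real test.yml may differ in detail.
platform: linux
run:
  path: bash
  args:
  - -c
  - |
    set -eux
    p-automator --help
    om --help
```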
Platform Automation¶
- Read about platform automation.
- Peruse the list of tasks this project provides.
- Review the task named `test`.
- Review its usage: click on the *Usage* tab:

```yaml
- task: test
  file: platform-automation-tasks/tasks/test.yml
  image: platform-automation-image
```
Your first pipeline¶
Navigate to `~/workspace/pipelines`.

Edit a new pipeline in a file named `test-pipeline.yml`.

This pipeline will have a single job. Name it `test-job`.
Copy the test task usage as a step in the job's plan, like so:
```yaml
jobs:
- name: test-job
  plan:
  - task: test
    file: platform-automation-tasks/tasks/test.yml
    image: platform-automation-image
```
In order for this job to work, it's clear that we need both the image and the tasks. Insert `get` steps into your job to fetch these artifacts.
Your pipeline should now look like this:
```yaml
jobs:
- name: test-job
  plan:
  - in_parallel:
    - get: platform-automation-tasks
      params: { unpack: true }
    - get: platform-automation-image
      params: { unpack: true }
  - task: test
    file: platform-automation-tasks/tasks/test.yml
    image: platform-automation-image
```
We could configure Concourse to fetch the image and tasks directly from the Tanzu Network (pivnet). But that's not a good practice.
Instead, you will create a blobstore in the form of a Google Cloud Storage (GCS) bucket, and place the artifacts you downloaded into the bucket.
Then you will be able to model the two files as Concourse resources backed by GCS, with the help of the GCS resource type for Concourse.
Create the blobstore¶
In the spirit of DevOps, instead of manually creating the blobstore, we're going to automate this with Terraform.
Navigate to `~/workspace/paving-concourse`, and invoke these commands:

```shell
git checkout blobstore
git diff master
```
On the branch named `blobstore` is a new file, `blobstore.tf`, that creates a bucket, a service account, a corresponding service account key, and a permission for the service account to access storage.

Note also that the file `outputs.tf` contains two new outputs: the generated bucket name and the generated service account key, both necessary to programmatically read from and write to that bucket.
Re-read the instructions in the repository's `README.md`. To apply this change:

- re-run the `refresh` command
- run the `plan` command
- run the `apply` command
Make a note of the outputs `blobstore_bucket_name` and `blobstore-service-account-key`; you will be using them shortly.
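If the README's exact invocations have scrolled out of memory, the sequence is conceptually as follows; your repository's commands may add `-var-file` or other flags, so defer to the README:

```shell
terraform refresh
terraform plan -out=blobstore.tfplan
terraform apply blobstore.tfplan
terraform output    # prints the new outputs, including the bucket name
```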
Copy the artifacts to the bucket¶
You will automate this step in the next lab.
The gcloud SDK comes with a utility named `gsutil` that is useful for manually copying artifacts to and from buckets.

Familiarize yourself with gsutil¶
Navigate back to `~/workspace/artifacts`.

Invoke the `gsutil ls` command. Do you see your bucket?

Can you figure out how to use `gsutil cp` to copy both the `platform-automation-tasks` and `platform-automation-image` files to your bucket?

After you have copied the files, use the `gsutil ls` command to verify that the two files are indeed inside your bucket.
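If you get stuck, the copy step can be sketched as follows, where `YOUR-BUCKET` is a placeholder for the `blobstore_bucket_name` output from the previous section:

```shell
# YOUR-BUCKET is a placeholder; substitute your generated bucket name.
gsutil cp platform-automation-tasks-*.zip gs://YOUR-BUCKET/
gsutil cp platform-automation-image-*.tgz gs://YOUR-BUCKET/
gsutil ls gs://YOUR-BUCKET/
```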
Back to the pipeline¶
Navigate back to `~/workspace/pipelines`.

Model the two artifacts from GCS in the file `test-pipeline.yml` as follows:
```yaml
resources:
- name: platform-automation-tasks
  type: gcs
  source:
    bucket: ((bucket))
    regexp: platform-automation-tasks-(.*).zip
    json_key: ((json_key))
- name: platform-automation-image
  type: gcs
  source:
    bucket: ((bucket))
    regexp: platform-automation-image-(.*).tgz
    json_key: ((json_key))
```
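The `(.*)` capture group in each `regexp` is what Concourse treats as the resource's version. You can sanity-check the pattern against your filename locally; this sketch uses `sed` (with the literal dot escaped) rather than anything Concourse-specific:

```shell
# Extract the version the GCS resource would see from a sample filename.
file="platform-automation-tasks-5.0.0.zip"
echo "$file" | sed -E 's/platform-automation-tasks-(.*)\.zip/\1/'
# → 5.0.0
```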
Concourse doesn't know about the resource type `gcs`, so you must define it (see https://github.com/frodenas/gcs-resource):
```yaml
resource_types:
- name: gcs
  type: docker-image
  source:
    repository: frodenas/gcs-resource
```
Put the bucket and json key in CredHub¶
The references to `((bucket))` and `((json_key))` will resolve against CredHub.

Edit a file named `credentials.yml` like so:
```yaml
---
credentials:
- name: /concourse/main/bucket
  type: value
  value: put-the-name-of-your-bucket-here
- name: /concourse/main/json_key
  type: value
  value: |
    value of your json key goes here
    in place of this text that you're reading
    make sure it's indented two spaces in
    like i'm showing here
```
And import it into CredHub with:

```shell
credhub import -f credentials.yml
```
- Use the `credhub find` and `credhub get` commands to satisfy yourself that your credentials were stored safely in CredHub.
- Delete `credentials.yml`.
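The verification step can be done with, for example, the following commands; the paths follow the names used in `credentials.yml` above:

```shell
credhub find -n /concourse/main
credhub get -n /concourse/main/bucket
```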
The pipeline is now complete.
- Set the pipeline:

```shell
fly -t main set-pipeline -p test-pipeline -c test-pipeline.yml
```
Unpause the pipeline, and trigger the job. Does it pass? Inspect the output of the job.
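If you prefer the CLI to the web UI, unpausing and triggering can be done with `fly` against the same `main` target used above:

```shell
fly -t main unpause-pipeline -p test-pipeline
fly -t main trigger-job -j test-pipeline/test-job --watch
```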
Exercise
Run a bash shell in docker locally on your jumpbox using the same platform automation image.
Run the same commands that `test.sh` runs:

```shell
p-automator --help
om --help
```
The same thing takes place inside Concourse when you run your job.
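One way to approach the exercise, assuming the image was loaded in the prior lab under a tag such as `platform-automation-image` (the tag is an assumption; substitute whatever `docker images` shows on your jumpbox):

```shell
# The image tag below is an assumption; check `docker images` for yours.
docker run -it --rm platform-automation-image bash
# then, inside the container:
p-automator --help
om --help
```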