Assemble the products¶
In this lab, you will assemble into the blobstore all the product artifacts needed to provision Pivotal Platform. The goal is not to fetch these products once, but to have a pipeline that can fetch new versions of these products daily.
Identify products to download¶
Here is a starting list of products to build a foundation:
- Ops Manager
- Pivotal Application Service (PAS) (and corresponding Stemcell). In this course, we will deploy the Small Footprint version of PAS.
- Pivotal Healthwatch (and corresponding Stemcell)
A glance at the products on the Tanzu Network shows that the PAS product is the largest, at approximately 10GB. Healthwatch is approximately 2GB. The Ops Manager YAML for GCP, on the other hand, is tiny: on the order of 200 bytes.
Strategy¶
The source of these products is the Tanzu Network. The target (or destination) is the gcs blobstore.
With Concourse, one usually models both the source and target as resources. A job can then get the source resource and put it to the target resource.
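Purely as an illustration of that conventional pattern (the resource names below are hypothetical, and this is not the approach we will take), such a job might look like this:

```yaml
jobs:
- name: copy-artifact
  plan:
  - get: source-artifact        # hypothetical resource modeling the source
  - put: target-blobstore       # hypothetical resource modeling the target
    params:
      file: source-artifact/*   # push the fetched file to the target
```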
Platform Automation recommends a different approach:
- Instead of modeling source resources, use a special task named `download-product` to fetch the artifacts from the Tanzu Network.
The `download-product` task¶
Read about the `download-product` task here.
Note the reference to an accompanying config file.
Here is a rough outline for our pipeline:
- define a target resource for each artifact in the gcs blobstore
- define one job per product to download (i.e. `fetch-opsman`, `fetch-pas`, `fetch-healthwatch`)
- download each product using the `download-product` task
- use a separate configuration file to configure the `download-product` task
- put the downloaded artifact into the gcs blobstore
Where to store the configuration files¶
You will place configuration files in a git repository, separate from the pipeline.
A git repository can be modeled as a resource in Concourse.
We can add a step to the above outline:
- get the contents of the repository before invoking the `download-product` task.
The pipeline¶
Use the test pipeline from the previous lab as the starting point.
The test pipeline already defines the two platform automation resources, which are necessary to run the `download-product` task or any other platform automation task.
- Navigate to `~/workspace/pipelines`.
- Copy `test-pipeline.yml` to a new file `fetch-products.yml`.
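On the jumpbox, those two steps amount to something like:

```bash
cd ~/workspace/pipelines
cp test-pipeline.yml fetch-products.yml
```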
Start with a single product, the smallest one: the Ops Manager YAML artifact.
Rename the job from `test-job` to `fetch-opsman`.
The `in_parallel` step is required for any job that uses platform automation tasks, so keep it.
The documentation of `download-product` provides example usage for downloading products from Pivnet.
- Grab a copy of the Pivnet Usage (yaml) from the docs.
- Replace the test task with the download-product task usage sample.
- Rename the task to `download-opsman-product`
- The `CONFIG_FILE` value should read `download-product/opsman.yml`
- The `output_mapping` line is not needed. Delete that line.
Your job should look like this:
```yaml
jobs:
- name: fetch-opsman
  plan:
  - in_parallel:
    - get: platform-automation-tasks
      params: { unpack: true }
    - get: platform-automation-image
      params: { unpack: true }
  - task: download-opsman-product
    image: platform-automation-image
    file: platform-automation-tasks/tasks/download-product.yml
    params:
      CONFIG_FILE: download-product/opsman.yml
    input_mapping:
      config: configuration
```
The task reference for `download-product` defines outputs. The downloaded product will be available in a subdirectory named `downloaded-product`.
Append a put step to copy the downloaded product to gcs:
```yaml
- put: opsman-product
  params:
    file: downloaded-product/*
```
Define a resource named `opsman-product`, patterned after the other two gcs resources already in your pipeline:

```yaml
- name: opsman-product
  type: gcs
  source:
    bucket: ((bucket))
    regexp: \[ops-manager,(.*)\].*.yml
    json_key: ((json_key))
```
Remember that resources go in a different section of the pipeline yaml, under `resources`.
The configuration files will come from a git repository. Add a git repository resource:
```yaml
- name: configuration
  type: git
  source:
    private_key: ((config-repo-key.private_key))
    uri: ((config-repo-uri))
```
Note that we name this resource `configuration`.
In the job, under the `in_parallel` step, append a get step to fetch the contents of the git repository:

```yaml
- get: configuration
```
The Git Repository¶
- If you do not have a personal public GitHub account, head over to github.com and create one.
- Sign in to your github.com account
- Click the green New button to create a new git repository
- Name your repository platform-config
- Click Create repository
- Copy to your clipboard the set of instructions under the title ..or create a new repository on the command line
On the jumpbox:
- Create the directory `~/workspace/platform-config` and navigate into it.
- Make sure that your working directory is `~/workspace/platform-config`
- Paste the instructions into your shell
The last instruction, `git push`, will fail: the jumpbox does not yet have credentials that allow it to push to your GitHub repository. The next section fixes that.
Set up a deploy key¶
In your GitHub repository in your browser:
- Click Settings
- Select the tab Deploy keys
- Open the guide on deploy keys in a separate tab
- Follow the link to the instructions to Run the `ssh-keygen` procedure
This will generate a keypair `~/.ssh/id_rsa` and `~/.ssh/id_rsa.pub`.
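For reference, a typical invocation (assuming an RSA key written to the default location; the e-mail comment is just a label) looks like this, followed by printing the public key so you can copy it in the next step:

```bash
ssh-keygen -t rsa -b 4096 -C "you@example.com"   # accept the default file location
cat ~/.ssh/id_rsa.pub                            # print the public key for copying
```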
- Copy the contents of the public key (the one with the `.pub` suffix).
- Back in your repository, click Add deploy key
- Give the key a title
- Paste the public key
- Check Allow write access
- Click Add key
Re-run the git push instruction:
git push -u origin master
Create the config file¶
Study the documentation to figure out how to write `download-product/opsman.yml`.
Grab the contents of the Tanzu Network tab as a starting point.
```yaml
---
pivnet-api-token: ((pivnet-token))
pivnet-file-glob: "ops-manager-gcp-*.yml"
pivnet-product-slug: ops-manager
product-version-regex: ^2\.8\..*$
blobstore-bucket: ((bucket))
```
Notes
- A pivnet token must be supplied for the job to be authorized to fetch products
- The slug is the product name (matches the output of `pivnet products`)
- The file glob is a pattern that matches the file to download
- The version regex specifies a range of versions to match
Save the file, add it to git, commit it, and push it to your repository.
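Assuming you placed the file at `download-product/opsman.yml` (the path referenced by `CONFIG_FILE`), the git steps might look like this; the commit message is arbitrary:

```bash
cd ~/workspace/platform-config
git add download-product/opsman.yml
git commit -m "Add Ops Manager download-product config"
git push
```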
Visit your repository on github and verify that the new file is now present.
Store new secrets¶
The new pipeline references two new secrets which are not yet in credhub:
- `config-repo-uri`
- `config-repo-key`
`opsman.yml` also references a variable: `pivnet-token`.
In `~/workspace`, draft a new `credentials.yml` file, like so:
```yaml
---
credentials:
- name: /concourse/main/pivnet-token
  type: password
  value: enter-your-pivnet-token-in-my-place
- name: /concourse/main/config-repo-uri
  type: value
  value: git@github.com:what-is-your/repository-name.git
- name: /concourse/main/config-repo-key
  type: ssh
  value:
    public_key: paste your public key here all on a single line
    private_key: |
      enter-the-contents-of-your-private-key-here
      indented two spaces
      like i'm showing here
```
Notes
- Be sure to use the ssh uri to your git repo
- The repository key is a key pair of type `ssh` and has two parts (though we only dereference the private key)
- Mind the indentation with yaml
Import the file into CredHub:
credhub import -f credentials.yml
- Verify that the credentials were imported successfully
- Delete the file `credentials.yml`.
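One way to perform the verification step above (before deleting the file) is to list everything stored under the pipeline's prefix and check that the three new entries appear:

```bash
credhub find -p /concourse/main
```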
One more thing¶
We overlooked one important item:
When placing a variable reference in a pipeline, Concourse automatically interpolates it against CredHub.
This happens for `((bucket))`, `((json_key))`, and the two new git variables we just added.
`((pivnet-token))` is different. It is not in the pipeline yaml. It is referenced from a configuration file sitting in git. Consequently, it does not get interpolated.
Platform Automation provides `credhub-interpolate` to interpolate CredHub references in a given file or set of files.
credhub-interpolate¶
- Review the task documentation.
It must be given all of the information necessary to access credhub:
- a credhub server url,
- credhub client credentials (a client name and secret), and
- the credhub ca certificate
The task also needs to know under what prefix (`/concourse/main`) to look up the variables in question.
Wedge a credhub interpolate step in between the get and download-product steps.
```yaml
- task: interpolate-env-creds
  image: platform-automation-image
  file: platform-automation-tasks/tasks/credhub-interpolate.yml
  params:
    CREDHUB_CLIENT: ((credhub-client))
    CREDHUB_SECRET: ((credhub-secret))
    CREDHUB_SERVER: ((credhub-server))
    CREDHUB_CA_CERT: ((credhub-ca-cert.certificate))
    PREFIX: '/concourse/main'
    INTERPOLATION_PATHS: download-product
    SKIP_MISSING: false
  input_mapping:
    files: configuration
  output_mapping:
    interpolated-files: interpolated-configs
```
Above, the four new CredHub variables will be natively interpolated by Concourse, since they are referenced directly inside the pipeline.
It feels strange to have to put CredHub's own url into CredHub.
The interpolated files will be output to `interpolated-configs`.
- For the subsequent step to use the interpolated files, `download-product`'s `input_mapping` must be revised from `configuration` to `interpolated-configs`.
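For clarity, here is what the revised download step looks like; only the `input_mapping` value differs from the version shown earlier:

```yaml
- task: download-opsman-product
  image: platform-automation-image
  file: platform-automation-tasks/tasks/download-product.yml
  params:
    CONFIG_FILE: download-product/opsman.yml
  input_mapping:
    config: interpolated-configs
```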
The pipeline is now complete.
All that remains is putting four credhub variables into credhub.
The credentials¶
It just so happens you have the credhub server url, the admin user and its password, and the credhub ca certificate in `~/workspace/.envrc`.
Use those values for now.
In the next lab we correct this issue and create a dedicated user with a narrower scope of permissions.
You know the drill:
- create a `credentials.yml` file:

```yaml
---
credentials:
- name: /concourse/main/credhub-server
  type: value
  value: put-the-value-of-your-credhub-server-url-in-here-from-your-envrc-file
- name: /concourse/main/credhub-client
  type: value
  value: credhub_admin
- name: /concourse/main/credhub-secret
  type: password
  value: replace-me-with-your-credhub-secret
- name: /concourse/main/credhub-ca-cert
  type: certificate
  value:
    certificate: |
      paste-your-credhub-ca-certificate here,
      indented two spaces
      like i'm showing here
```
- import the credentials: `credhub import -f credentials.yml`
- Verify that the secrets have been imported.
- Delete `credentials.yml`
Set the pipeline¶
Time to test the job.
fly -t main set-pipeline -p fetch-products -c fetch-products.yml
Unpause the pipeline and trigger the job.
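If you prefer the command line to the dashboard, the equivalent fly commands are roughly:

```bash
fly -t main unpause-pipeline -p fetch-products
fly -t main trigger-job -j fetch-products/fetch-opsman --watch
```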
Troubleshoot as needed to make the job pass.
Verify (using `gsutil ls`) that the bucket now contains the Ops Manager YAML artifact.
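For example (substitute your actual bucket name):

```bash
gsutil ls gs://YOUR_BUCKET_NAME
```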
Add a daily trigger¶
Concourse provides a resource type called `time`.
These can be used as triggers that kickstart a job based on a time interval.
Define a 24-hour time interval resource.
```yaml
- name: daily
  type: time
  source:
    interval: 24h
```
To trigger the job daily, append the following underneath `in_parallel`.
```yaml
- get: daily
  trigger: true
```
Make sure to indent the above correctly within its data block.
Update your pipeline. The daily trigger will now appear as an input to fetch-opsman. The solid line indicates that it is a trigger.
Version control your pipelines¶
Our pipelines are starting to become a project in their own right.
In software development circles, version control is often referred to as a safety harness. Even if you muck up your pipeline, you can always go back to a known good version.
- Navigate to `~/workspace/pipelines`
- Initialize the directory as a local git repository
- Stage all three pipeline files and commit them (a sketch of these steps follows)
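A minimal sketch of those steps (the commit message is arbitrary):

```bash
cd ~/workspace/pipelines
git init
git add *.yml
git commit -m "Place pipelines under version control"
```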
Feel free to configure a remote, but that is not necessary.
Rinse and repeat¶
Fetch Healthwatch¶
Can you figure out how to write a job named `fetch-healthwatch` with a task named `download-healthwatch-product`?
The task needs to be configured by a product configuration file named `healthwatch.yml` that you must add and push to your platform-config repository.
The configuration file¶
Use the `pivnet` cli to figure out the inputs to the download-product configuration file (sample commands follow the list below):
- the name of the product slug
- the name of the file, from which you can extract a simple file glob
- what range of versions of the product you wish to download
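For example, assuming you are already logged in with the pivnet CLI (flags per `pivnet --help`; the release version shown is only illustrative):

```bash
pivnet products                                   # confirm the product slug
pivnet releases -p p-healthwatch                  # list available versions
pivnet product-files -p p-healthwatch -r 1.8.2    # list files, to derive the glob
```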
Unlike the Ops Manager, Healthwatch has an accompanying stemcell. The configuration file must also specify the `stemcell-iaas`.
Use the docs as reference.
Configuration
The download-product configuration file.
```yaml
---
pivnet-api-token: ((pivnet-token))
pivnet-file-glob: "p-healthwatch-*.pivotal"
pivnet-product-slug: p-healthwatch
product-version-regex: ^1\.8\..*$
blobstore-bucket: ((bucket))
stemcell-iaas: google
```
Be sure to push the new configuration file up to GitHub.
The resources¶
For Healthwatch, two artifacts are fetched: the product and accompanying stemcell.
This requires that you model two target resources in gcs, one for each. The difficulty here is the complexity of the target file name regexp.
If you get stuck, have a look at the solution below.
Resources
The target product resource.
```yaml
- name: healthwatch-product
  type: gcs
  source:
    bucket: ((bucket))
    regexp: \[p-healthwatch,(.*)\]p-healthwatch-.*.pivotal
    json_key: ((json_key))
```
The target stemcell resource.
```yaml
- name: healthwatch-stemcell
  type: gcs
  source:
    bucket: ((bucket))
    regexp: healthwatch-stemcell/\[stemcells-ubuntu-xenial,(.*)\]light-bosh-stemcell-.*-google-kvm-ubuntu-xenial-go_agent\.tgz
    json_key: ((json_key))
```
The job¶
The last step of the `fetch-healthwatch` job must put both the downloaded product and the downloaded stemcell to gcs.
Use an in_parallel step with two child put steps.
If you get stuck, peek at the answer below.
Job
The last step of the job.
```yaml
- in_parallel:
  - put: healthwatch-product
    params:
      file: downloaded-product/*
  - put: healthwatch-stemcell
    params:
      file: downloaded-stemcell/*
```
Fetch PAS¶
This third job is structurally the same as healthwatch.
Remember to fetch the small footprint runtime product.
Attempt to write this yourself.
Once more, if you get stuck, have a peek at the help below.
PAS
Product configuration file:
```yaml
---
pivnet-api-token: ((pivnet-token))
pivnet-file-glob: "srt-*.pivotal"
pivnet-product-slug: elastic-runtime
product-version-regex: ^2\.8\.\d+$
blobstore-bucket: ((bucket))
stemcell-iaas: google
```
Resources:
```yaml
- name: pas-product
  source:
    bucket: ((bucket))
    json_key: ((json_key))
    regexp: \[elastic-runtime,(.*)\]srt-.*.pivotal
  type: gcs
- name: pas-stemcell
  source:
    bucket: ((bucket))
    json_key: ((json_key))
    regexp: pas-stemcell/\[stemcells-ubuntu-xenial,(.*)\]light-bosh-stemcell-.*-google-kvm-ubuntu-xenial-go_agent\.tgz
  type: gcs
```
After you set the pipeline, this job will take longer than the previous two, due to the larger size of the artifact being downloaded and then uploaded to your gcs bucket.
Once you have verified that the job succeeds, commit your pipeline changes to git.
Keeping Platform Automation up to date¶
In the previous lab, you manually downloaded and copied the Platform Automation artifacts (the tasks zip file and docker image) to the blobstore.
Each time the Platform Automation team publishes a new version of its product to the Tanzu Network, the updated artifacts have to be downloaded and copied to the blobstore manually.
In this section, you will automate this chore.
This job is similar to the other jobs in this pipeline, with one caveat: you do not dogfood Platform Automation to fetch Platform Automation. You instead use the native Concourse way.
- Do not use `download-product`.
- Use a custom Concourse resource type that knows how to fetch products from the Tanzu Network:

```yaml
- name: pivnet
  type: docker-image
  source:
    repository: pivotalcf/pivnet-resource
    tag: latest-final
```

Add the above to the `resource_types` section of your pipeline.
- Model the source artifacts as a Concourse resource:

```yaml
- name: platform-automation-pivnet
  type: pivnet
  source:
    api_token: "((pivnet-token))"
    product_slug: platform-automation
    product_version: 4\.(.*)
    sort_by: semver
```
- Write a much simpler job (one that does not require setting up a git repository or interpolating credentials explicitly) that gets `platform-automation-pivnet` and puts to your already-defined target resources `platform-automation-image` and `platform-automation-tasks`.

```yaml
- name: fetch-platform-automation
  plan:
  - get: platform-automation-pivnet
    trigger: true
  - in_parallel:
    - put: platform-automation-tasks
      params:
        file: platform-automation-pivnet/*tasks*.zip
    - put: platform-automation-image
      params:
        file: platform-automation-pivnet/*image*.tgz
```
The `trigger: true` setting ensures that when a new version of the product becomes available on the Tanzu Network, the job is triggered automatically.
Update the pipeline, see the new job in the Concourse dashboard, and run it.
Once you have verified that the job succeeds, git-commit the changes to the pipeline.