This is the third part in a multi-part series about the automation of VMware vSphere template builds. A template is a pre-installed virtual machine that acts as a master image for cloning new virtual machines. In this series I am describing how to get started on automating the process. As I am writing these blog posts while working on the implementation, some aspects may seem “rough around the edges”.

  • Part 1: Overview/Motivation
  • Part 2: Building a pipeline for a MVP
  • Part 3: An overview of packer
  • Part 4: Concourse elements and extending the pipeline

If all went well, part 2 left you with a functional Ubuntu template. I opted for a quick win to hook you on the topic of automation but didn’t really go into the details of what is happening in the background. This part starts at the bottom by taking a look at the packer configuration and setup.

Overview#

For this part I assume that you completed part 2 successfully and you are able to re-run the pipeline whenever you want.

The blog post covers:

  • The packer binary
  • Packer configuration

Why Packer?#

Let me start with a quote from the official packer homepage:

HashiCorp Packer is easy to use and automates the creation of any type of machine image.

It is also open source, under active development, available for major operating systems and widely used, making it something of a de-facto standard for automated builds of machine images (a.k.a. templates). Oh, and on a side note, I would also argue that packer itself is easy to learn but automating the OS installation isn’t :-)
There are a ton of blog posts about packer available; make sure to look around for more information to grasp its full potential.

The executable#

Packer consists of a single Go binary that can be run with no further dependencies, which makes it ideal for shipping in a container or for getting started right away on your local workstation. Running packer without arguments shows you all available commands:

packer
Usage: packer [--version] [--help] <command> [<args>]

Available commands are:
    build       build image(s) from template
    console     creates a console for testing variable interpolation
    fix         fixes templates from old versions of packer
    inspect     see components of a template
    validate    check that a template is valid
    version     Prints the Packer version

For this blog post I am concentrating on the build command, as the focus of this series is creating images. Thinking ahead, additional steps could be added to the template pipeline, e.g. to validate changes whenever the packer configuration file is modified.

Plugins#

Packer can be extended with plugins for additional functionality. For the template pipeline I am using the popular Packer Builder for VMware vSphere in version 2.3 from JetBrains.
The main advantage over the native packer builder (more on builders later) is the ability to use the vSphere API, which makes the process a lot smoother with fewer additional dependencies (I am inclined to call some of the steps in the native implementation more or less “hacks”).
At the time of writing I am forced to downgrade to packer version 1.4.5 as the more recent versions are affected by an issue in conjunction with the JetBrains plugin. Once the issue is resolved I will update my container to the most recent version.

Packer configuration#

Once you run packer with the build command you are expected to provide a template.

packer build --help
Usage: packer build [options] TEMPLATE
[...]

Not to confuse terms here, a packer template is a JSON file that consists of:

  • Builders: Create and generate the machine image
  • Provisioners: Configure the machine image, also with integration into tools like ansible
  • Communicator: Used by packer to interact with the machine, e.g. to upload files, execute scripts, etc.
  • (Optional) Variables
  • (Optional) Post-Processors: Run tasks after the image is created

More on the how and why in the next sections.
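To make the structure concrete, here is a stripped-down template skeleton; all values are placeholders for illustration, not the actual configuration from my repository:

```json
{
  "variables": {
    "vcenter_server": "vcenter.example.local"
  },
  "builders": [
    {
      "type": "vsphere-iso",
      "vcenter_server": "{{user `vcenter_server`}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": ["echo 'configure the image here'"]
    }
  ],
  "post-processors": [
    {
      "type": "manifest"
    }
  ]
}
```

Note that the communicator (SSH by default) does not get a block of its own; it is configured via parameters inside the builder section.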

Variables#

I listed the variables as optional, but that doesn’t mean it is a good idea to work with hard-coded values all the time :-)
In my example I decided to split the variables into two parts, following the principle of don’t repeat yourself (DRY):

  • Common variables (here: common-vars.json) are valid for any template that is being created, e.g. the vCenter.
  • Variables in the actual packer template JSON file (here: ubuntu-180403-docker.json) are specific to this build.
.
└── packer
    ├── common
    │   └── common-vars.json
    └── ubuntu-180403-docker.json

Once you have variables that are not part of your main template file, you need to specify them either as command-line arguments or by using the -var-file= parameter as I did.
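As a sketch of the split (the values below are placeholders, not my real environment), common-vars.json could look like this:

```json
{
  "vcenter_server": "vcenter.example.local",
  "vcenter_username": "administrator@vsphere.local",
  "vcenter_datacenter": "Datacenter1"
}
```

Inside the template file these values are then referenced with the user function, e.g. "vcenter_server": "{{user `vcenter_server`}}", and handed over at build time via -var-file=common/common-vars.json.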

Builders#

This section of the packer configuration lays the foundation of the template creation:
You describe where the template will be placed, by which method the template is created, how the template is defined from a virtual machine perspective, and which OS-specific automation files (preseed, kickstart, …) are provided for the actual installation. Most of the parameters are either self-explanatory or well documented on the GitHub page of the plugin, but I want to highlight a few things:

 "type": "vsphere-iso"

This calls the JetBrains plugin with the instruction to build the template from the ground up from an installation image (a.k.a. an ISO). The source for the installation file is defined by all parameters starting with iso. Once you want to extend my example by adding more templates for different operating systems, you start here to adjust the sources. Also, as mentioned in the second part, it is mandatory to supply a checksum for the ISO file.

   "iso_paths": [
      "[Datastore1] OS-ISO/ubuntu-18.04.3-server-amd64.iso"
    ],
    "iso_checksum": "cb7cd5a0c94899a04a536441c8b6d2bf",
    "iso_checksum_type": "md5",

The actual setup of the operating system is defined in the unattended installation file of the OS (here: preseed_180403.cfg). In this example I am using preseed on the Debian-based Ubuntu; going into details here would go far beyond the scope and could be worth another one or two blog posts. I opted for handing over the preseed file by floppy so that I do not need to worry about a webserver, whether by exposing the port from the container or by placing it on yet another machine - but you can always adjust this to your needs.

  "floppy_files": [
      "{{template_dir}}/ubuntu180403docker/preseed_180403docker.cfg"
    ]

The preseed file is referenced in the boot command; with this parameter, actual keystrokes are passed to the VM as in a bare-metal installation. I recommend searching for examples for each OS family.

  "boot_command": [
   [...]
      " file=/media/preseed_180403docker.cfg",
      " -- <wait>",
      "<enter><wait>"
    ]

Provisioners#

Once the builder has created a running image, the final adjustments inside the template can be made by using provisioners. I am copying a configuration file to the template VM (here: daemon.json) which will later be handled by a simple shell script for the docker installation, followed by a cleanup script (here: packer-ubuntu18-cleanup.sh).

        {
          "type": "file",
          "source": "common/files/daemon.json",
          "destination": "/tmp/daemon.json"
        },
        {
          "type": "shell",
          "scripts": [
            "ubuntu180403docker/scripts/packer-script-native-docker.sh",
            "ubuntu180403docker/scripts/packer-ubuntu18-cleanup.sh"
          ]
        }

Provisioners are very powerful, and their integration with configuration management tools like ansible should be very high on your priority list for follow-up reading.
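As a taste of what such an integration could look like, packer ships with an ansible provisioner that runs a playbook against the freshly built machine; the playbook path below is a made-up example, not part of my repository:

```json
{
  "type": "ansible",
  "playbook_file": "./playbooks/base-hardening.yml"
}
```

This slots into the provisioners array just like the file and shell provisioners above, letting ansible take over the heavy lifting of configuration.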

Post-processors#

When the builders and provisioners have created the template, post-processors allow further handling of the build artifact. In my example I tell packer to create a build manifest along with some custom information. The avid reader might have noticed these lines in my pipeline configuration example:

          outputs:
            - name: packer-build-manifest
              path: packer-files/packer/packer-manifest

This means the manifest is pushed back to the pipeline and the build information can be processed further down in the following jobs and tasks.
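On the packer side, the corresponding post-processor definition might look roughly like this; the output path and custom data below are illustrative assumptions, not copied from my repository:

```json
"post-processors": [
  {
    "type": "manifest",
    "output": "packer-manifest/manifest.json",
    "strip_path": true,
    "custom_data": {
      "template_version": "v1.0"
    }
  }
]
```

The manifest post-processor writes build metadata (artifact IDs, build time, and any custom_data entries) to a JSON file that later pipeline steps can parse.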

Note: I corrected a few lines in my example repository as the manifest wasn’t created and exported. Please make sure to pull a fresh build from git.

Additional parameters on execution#

As a parting note, a few words about the additional parameters that I added to the packer call in the pipeline:

packer build -force -timestamp-ui -var-file=common/common-vars.json ubuntu-180403-docker.json

The -force parameter is a nice feature for testing, as the next packer run will simply delete the existing template VM and re-create it. It will not work while the template VM is powered on, and it might be a good idea to remove this parameter once you move your setup to prod. You can spot the use of the force parameter by this line in the packer output:

2020-01-23T09:26:29Z: ==> vsphere-iso: the vm/template ubuntu_18.04.03_docker_v1.0 already exists, but deleting it due to -force flag

The -timestamp-ui parameter does exactly what the name says: it prefixes each output line with a timestamp, which helps when comparing events between different log files.


Summary#

Hopefully this high-level overview of packer gave you an idea of where to turn the knobs when you want to adjust the template to your needs. There is a ton of options I did not and could not cover, but I trust in the power of search engines for the more advanced use cases. In part four I am planning to go into the Concourse CI setup.