Packer
Standardize artifacts across multiple cloud providers
As your organization grows, you may adopt a hybrid or multi-cloud strategy to enable innovation, increase resiliency, decrease costs, or integrate different systems. Packer is a cloud-agnostic tool that lets you build identical artifacts for multiple platforms from a single template file. By tracking your build metadata through HCP Packer, you can query it for future downstream Packer builds or reference artifacts in your Terraform configuration.
In this tutorial, you will build and deploy an artifact containing HashiCups, a fictional coffee-shop application, in AWS and Azure. To do so, you will use Packer to build and store the artifacts in AWS and Azure, push the build metadata to HCP Packer, and use Terraform to deploy the artifacts to their respective cloud providers. In the process, you will learn how to use Packer and HCP Packer to standardize artifacts across multi-cloud and hybrid environments.
Prerequisites
This tutorial assumes that you are familiar with the workflows for Packer, HCP Packer, and either Terraform Community Edition or HCP Terraform. If you are new to Packer, complete the Packer Get Started tutorials first. If you are new to HCP Packer, complete the Get Started with HCP Packer tutorials.
HCP Terraform is a platform that you can use to manage and execute your Terraform projects. It includes features like remote state and execution, structured plan output, workspace resource summaries, and more. The workflow for HCP Terraform is the same as Terraform Community Edition.
Select the Terraform Community Edition tab if you would rather complete this tutorial using Terraform Community Edition.
If you are new to Terraform, complete the Get Started tutorials first. If you are new to HCP Terraform, complete the HCP Terraform Get Started tutorials.
Next, you will need Terraform 1.2+ installed locally.
You will also need an HCP Terraform account with HCP Terraform locally authenticated.
In this tutorial, you will use the Terraform CLI to create an HCP Terraform workspace and trigger remote apply runs.
Now, install Packer 1.10.1+ locally.
You will also need an HCP account with an HCP Packer Registry.
Next, create a new HCP service principal and set the following environment variables locally.
Environment Variable | Description |
---|---|
HCP_CLIENT_ID | The client ID generated by HCP when you created the HCP Service Principal |
HCP_CLIENT_SECRET | The client secret generated by HCP when you created the HCP Service Principal |
HCP_PROJECT_ID | Find this in the URL of the HCP Overview page, https://portal.cloud.hashicorp.com/orgs/xxxx/projects/PROJECT_ID |
You will also need an AWS account with credentials set as local environment variables.
Environment Variable | Description |
---|---|
AWS_ACCESS_KEY_ID | The access key ID from your AWS key pair |
AWS_SECRET_ACCESS_KEY | The secret access key from your AWS key pair |
If you do not have one already, create an Azure account.
In your Azure account, create an Azure Active Directory Service Principal scoped to your Subscription, with the Contributor role, and an application secret. Be sure to copy the application secret value generated by Azure. Then, set the following environment variables.
Environment Variable | Description |
---|---|
ARM_CLIENT_ID | The Application (client) ID from your Azure Service Principal |
ARM_CLIENT_SECRET | The value generated by Azure when you created an application secret for your Azure Service Principal |
ARM_SUBSCRIPTION_ID | Your Azure subscription id |
ARM_TENANT_ID | The Directory (tenant) ID from your Azure Service Principal |
Next, create an Azure Resource Group in the US West 3 region and set the following environment variable.
Environment Variable | Description |
---|---|
TF_VAR_azure_resource_group | The name of the Azure Resource Group you created. Packer will store artifacts here, and Terraform will create HashiCups infrastructure here. |
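As a convenience, the setup above can be done from the terminal. The following sketch uses the Azure CLI to create the resource group and then exports every required variable; all values are placeholders you must replace with your own, and the resource group name is only an example.

```shell
# Create the resource group in US West 3 (name is an example; choose your own)
az group create --name learn-packer-rg --location westus3

# Replace each placeholder with your own credentials
export HCP_CLIENT_ID="<your HCP client ID>"
export HCP_CLIENT_SECRET="<your HCP client secret>"
export HCP_PROJECT_ID="<your HCP project ID>"
export AWS_ACCESS_KEY_ID="<your AWS access key ID>"
export AWS_SECRET_ACCESS_KEY="<your AWS secret access key>"
export ARM_CLIENT_ID="<your Azure application (client) ID>"
export ARM_CLIENT_SECRET="<your Azure application secret>"
export ARM_SUBSCRIPTION_ID="<your Azure subscription ID>"
export ARM_TENANT_ID="<your Azure directory (tenant) ID>"
export TF_VAR_azure_resource_group="learn-packer-rg"
```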
Clone repository
In your terminal, clone the example repository.
$ git clone https://github.com/hashicorp-education/learn-packer-multicloud
Navigate to the cloned repository.
$ cd learn-packer-multicloud
Review Packer template
The packer directory contains the files Packer uses to build artifacts.
In your editor, open variables.pkr.hcl. Packer uses the environment variables you set earlier for the first four variables.
packer/variables.pkr.hcl
variable "arm_client_id" {
type = string
default = env("ARM_CLIENT_ID")
}
variable "arm_client_secret" {
type = string
default = env("ARM_CLIENT_SECRET")
}
variable "arm_subscription_id" {
type = string
default = env("ARM_SUBSCRIPTION_ID")
}
variable "azure_resource_group" {
type = string
default = env("TF_VAR_azure_resource_group")
}
variable "azure_region" {
type = string
default = "westus3"
}
variable "aws_region" {
type = string
default = "us-west-1"
}
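Because these variables read from your environment or fall back to defaults, you can also override any of them at build time. For example, to build the AWS artifact in a different region (the region value here is illustrative; if you override it, use the same value when you deploy with Terraform later):

```shell
# Override the aws_region variable for a single build (example region)
packer build -var "aws_region=us-east-1" .
```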
Now, open packer/build.pkr.hcl.
The azure-arm.ubuntu-lts source block uses the client_id, client_secret, and subscription_id parameters to authenticate to Azure. Packer retrieves an Ubuntu 22.04 image to use as the base image and stores built images in the resource group specified by the managed_image_resource_group_name parameter.
packer/build.pkr.hcl
source "azure-arm" "ubuntu-lts" {
client_id = var.arm_client_id
client_secret = var.arm_client_secret
subscription_id = var.arm_subscription_id
os_type = "Linux"
image_offer = "0001-com-ubuntu-server-jammy"
image_publisher = "Canonical"
image_sku = "22_04-lts"
managed_image_resource_group_name = var.azure_resource_group
## ...
}
The amazon-ebs.ubuntu-lts source block retrieves an Ubuntu 22.04 AMI to use as the base image, from the region specified by the aws_region variable. The Amazon plugin for Packer uses the AWS credential environment variables you set earlier to authenticate to AWS.
packer/build.pkr.hcl
source "amazon-ebs" "ubuntu-lts" {
source_ami_filter {
filters = {
virtualization-type = "hvm"
name = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"
root-device-type = "ebs"
}
owners = ["099720109477"]
most_recent = true
}
region = var.aws_region
## ...
}
The build block references the artifact sources defined in the source blocks. Packer standardizes your artifacts across clouds by following the same instructions to build both artifacts.
packer/build.pkr.hcl
build {
source "source.amazon-ebs.ubuntu-lts" {
name = "hashicups"
}
source "source.azure-arm.ubuntu-lts" {
name = "hashicups"
location = var.azure_region
managed_image_name = "hashicups_${local.date}"
}
# systemd unit for HashiCups service
provisioner "file" {
source = "hashicups.service"
destination = "/tmp/hashicups.service"
}
# Set up HashiCups
provisioner "shell" {
scripts = [
"setup-deps-hashicups.sh"
]
}
## ...
}
First, Packer creates a virtual machine from each source image in both cloud providers. Then, it copies the HashiCups systemd unit file to each machine and runs the setup-deps-hashicups.sh
script to install and configure HashiCups. When the script finishes, Packer asks each cloud provider to create a new image from each virtual machine.
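The hashicups.service unit file itself is not shown in this tutorial. As a rough, hypothetical sketch, a systemd unit for a service like HashiCups has the following shape; the paths and commands below are illustrative, not the repository's actual values.

```ini
# Hypothetical sketch of a systemd unit; see hashicups.service in the
# repository for the real definition.
[Unit]
Description=HashiCups demo application
After=network.target

[Service]
# Illustrative path; the actual ExecStart comes from the repository's unit file
ExecStart=/usr/local/bin/hashicups
Restart=on-failure

[Install]
WantedBy=multi-user.target
```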
Finally, Packer sends artifact metadata from the newly built images to the specified HCP Packer registry bucket so you can reference the artifacts.
packer/build.pkr.hcl
## ...
# HCP Packer settings
hcp_packer_registry {
bucket_name = "learn-packer-multicloud-hashicups"
description = <<EOT
This is an image for HashiCups.
EOT
bucket_labels = {
"hashicorp-learn" = "learn-packer-multicloud-hashicups",
}
}
}
Build HashiCups artifacts
Change into the packer directory.
$ cd packer
Initialize the Packer template to install the required AWS and Azure plugins.
$ packer init .
Installed plugin github.com/hashicorp/azure v1.0.6 in "…"
Installed plugin github.com/hashicorp/amazon v1.0.9 in "…"
Packer installs the plugins specified in the required_plugins block of the build.pkr.hcl template. Packer plugins are standalone applications that perform tasks during builds. They extend Packer's capabilities, much like Terraform providers.
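The required_plugins block itself is not reproduced above. Based on the plugin names and versions in the packer init output, it has roughly this shape; this is a sketch, so check build.pkr.hcl for the exact version constraints.

```hcl
packer {
  required_plugins {
    amazon = {
      version = ">= 1.0.9"
      source  = "github.com/hashicorp/amazon"
    }
    azure = {
      version = ">= 1.0.6"
      source  = "github.com/hashicorp/azure"
    }
  }
}
```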
Now, build the HashiCups artifacts in both AWS and Azure.
Note
It may take up to 15 minutes for Packer to build the artifacts.
$ packer build .
amazon-ebs.hashicups: output will be in this color.
azure-arm.hashicups: output will be in this color.
==> azure-arm.hashicups: Publishing build details for azure-arm.hashicups to the HCP Packer registry
==> amazon-ebs.hashicups: Publishing build details for amazon-ebs.hashicups to the HCP Packer registry
## ...
Build 'amazon-ebs.hashicups' finished after 4 minutes 58 seconds.
## ...
Build 'azure-arm.hashicups' finished after 5 minutes 32 seconds.
==> Wait completed after 5 minutes 32 seconds
==> Builds finished. The artifacts of successful builds are:
--> azure-arm.hashicups: Azure.ResourceManagement.VMImage:
OSType: Linux
ManagedImageResourceGroupName: packer-rg
ManagedImageName: hashicups_0419
ManagedImageId: /subscriptions/1d1a90a0-b25a-4336-9303-8316efd81952/resourceGroups/packer-rg/providers/Microsoft.Compute/images/hashicups_0419
ManagedImageLocation: westus3
--> azure-arm.hashicups: Published metadata to HCP Packer registry packer/learn-packer-multicloud-hashicups/versions/01HMRZH96S3X418EW8F5FNN9RF
--> amazon-ebs.hashicups: AMIs were created:
us-west-1: ami-0a7afa1b40592d366
--> amazon-ebs.hashicups: Published metadata to HCP Packer registry packer/learn-packer-multicloud-hashicups/versions/01HMRZH96S3X418EW8F5FNN9RF
Packer builds artifacts in parallel in each cloud provider, reducing the total build time.
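By default, Packer runs all builds concurrently. If you run into cloud API rate limits, you can cap concurrency with the -parallel-builds flag:

```shell
# Run at most one build at a time (serializes the AWS and Azure builds)
packer build -parallel-builds=1 .
```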
Continue on to the next section while the build completes to learn how to deploy the artifacts to multiple clouds using Terraform. To skip the deployment step, proceed to Clean up your infrastructure.
Review Terraform configuration for HashiCups
The terraform directory contains the Terraform configuration to deploy the HashiCups machine images to Azure and AWS.
Open terraform/variables.tf. This file contains the variables used by the rest of the configuration. The aws_region and azure_region variables control which artifact metadata Terraform requests from HCP Packer and the regions where Terraform deploys the HashiCups images and infrastructure.
terraform/variables.tf
variable "aws_region" {
description = "The AWS region Terraform should deploy your instance to"
default = "us-west-1"
}
variable "azure_region" {
description = "The Azure region Terraform should deploy your instance to"
default = "westus3"
}
variable "cidr_vpc" {
description = "CIDR block for the VPC"
default = "10.1.0.0/16"
}
variable "cidr_subnet" {
description = "CIDR block for the subnet"
default = "10.1.0.0/24"
}
variable "environment_tag" {
description = "Environment tag"
default = "Learn"
}
variable "hcp_bucket_hashicups" {
description = "HCP Packer bucket name for hashicups image"
default = "learn-packer-multicloud-hashicups"
}
variable "hcp_channel" {
description = "HCP Packer channel name"
default = "production"
}
variable "azure_resource_group" {
description = "Azure Resource Group name where Terraform will create infrastructure"
}
Warning
Ensure that the values assigned to aws_region and azure_region match the values of the corresponding Packer variables. If you changed the value of the Packer variables during the build, change the Terraform variable values too.
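One way to keep these values in sync is a terraform.tfvars file. The values below are examples only; in particular, azure_resource_group must match the resource group you created earlier, whether you set it here or through the TF_VAR_azure_resource_group environment variable.

```hcl
# terraform.tfvars -- example overrides; match your Packer build regions
aws_region           = "us-west-1"
azure_region         = "westus3"
azure_resource_group = "learn-packer-rg" # example name; use your own
```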
Now, open terraform/hcp.tf. This configuration retrieves artifact information from HCP Packer using data sources.
The hcp_packer_version.hashicups data source retrieves artifact version information from the production channel of the learn-packer-multicloud-hashicups bucket. These values are the defaults for the configuration's input variables.
terraform/hcp.tf
data "hcp_packer_version" "hashicups" {
bucket_name = var.hcp_bucket_hashicups
channel_name = var.hcp_channel
}
The hcp_packer_artifact data sources use the version fingerprint from the hcp_packer_version data source to retrieve an artifact ID for each cloud provider. Notice the differences in the platform and region attributes between the two data sources.
terraform/hcp.tf
data "hcp_packer_artifact" "aws_hashicups" {
bucket_name = data.hcp_packer_version.hashicups.bucket_name
version_fingerprint = data.hcp_packer_version.hashicups.fingerprint
platform = "aws"
region = var.aws_region
}
data "hcp_packer_artifact" "azure_hashicups" {
bucket_name = data.hcp_packer_version.hashicups.bucket_name
version_fingerprint = data.hcp_packer_version.hashicups.fingerprint
platform = "azure"
region = var.azure_region
}
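A quick way to confirm which artifacts Terraform resolves through the channel is to expose the external identifiers as outputs. This is a sketch; the repository may already define similar outputs.

```hcl
# Sketch: surface the per-cloud artifact IDs resolved through HCP Packer
output "aws_ami_id" {
  value = data.hcp_packer_artifact.aws_hashicups.external_identifier
}

output "azure_image_id" {
  value = data.hcp_packer_artifact.azure_hashicups.external_identifier
}
```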
Open terraform/aws.tf. This configuration defines a VPC and network resources, an AWS EC2 instance running HashiCups, and a security group with public access on port 80. Notice that the HashiCups instance references the hcp_packer_artifact.aws_hashicups data source.
terraform/aws.tf
resource "aws_instance" "hashicups" {
ami = data.hcp_packer_artifact.aws_hashicups.external_identifier
instance_type = "t2.micro"
subnet_id = aws_subnet.subnet_public.id
vpc_security_group_ids = [aws_security_group.hashicups.id]
associate_public_ip_address = true
tags = {
Name    = "HashiCups"
Project = "Learn-Packer-MultiCloud"
}
}
Open terraform/azure.tf. This configuration defines a virtual network and network resources, an Azure virtual machine running HashiCups, and a security group with public access on port 80. The virtual machine references the hcp_packer_artifact.azure_hashicups data source.
Warning
This configuration hardcodes admin credentials for the Azure virtual machine for demo purposes. Do not hardcode credentials in production.
terraform/azure.tf
resource "azurerm_linux_virtual_machine" "hashicups" {
name = "${var.prefix}-vm"
source_image_id = data.hcp_packer_artifact.azure_hashicups.external_identifier
resource_group_name = data.azurerm_resource_group.main.name
location = data.azurerm_resource_group.main.location
size = "Standard_B1s"
admin_username = "ubuntu"
admin_password = "adminPass1!"
disable_password_authentication = false
network_interface_ids = [azurerm_network_interface.main.id]
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
tags = {
name = "hashicups"
learn = "learn-packer-multicloud"
}
}
Initialize configuration
Change to the terraform directory.
$ cd ../terraform
Set the TF_CLOUD_ORGANIZATION environment variable to your HCP Terraform organization name. This configures your HCP Terraform integration.
$ export TF_CLOUD_ORGANIZATION=
Initialize your configuration. Terraform will automatically create the learn-packer-multicloud workspace in your HCP Terraform organization.
$ terraform init
Initializing HCP Terraform...
Initializing provider plugins...
- Reusing previous version of hashicorp/aws from the dependency lock file
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Reusing previous version of hashicorp/hcp from the dependency lock file
- Installing hashicorp/aws v4.30.0...
- Installed hashicorp/aws v4.30.0 (signed by HashiCorp)
- Installing hashicorp/azurerm v3.22.0...
- Installed hashicorp/azurerm v3.22.0 (signed by HashiCorp)
- Installing hashicorp/hcp v0.44.0...
- Installed hashicorp/hcp v0.44.0 (signed by HashiCorp)
HCP Terraform has been successfully initialized!
You may now begin working with HCP Terraform. Try running "terraform plan" to
see any changes that are required for your infrastructure.
If you ever set or change modules or Terraform Settings, run "terraform init"
again to reinitialize your working directory.
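The automatic workspace creation comes from a cloud block in the configuration. A typical form, assuming the organization is supplied through the TF_CLOUD_ORGANIZATION environment variable, looks like this sketch; see the repository's terraform block for the exact settings.

```hcl
terraform {
  cloud {
    # The organization is read from the TF_CLOUD_ORGANIZATION environment variable
    workspaces {
      name = "learn-packer-multicloud"
    }
  }
}
```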
In HCP Terraform, navigate to the learn-packer-multicloud workspace.
Set the following workspace-specific variables. Use the correct variable type for each, and be sure to mark the secrets as sensitive.
Variable name | Description | Type |
---|---|---|
ARM_CLIENT_ID | The Application (client) ID from your Azure Service Principal | Environment variable |
ARM_CLIENT_SECRET | The value generated by Azure when you created an application secret for your Azure Service Principal | Environment variable |
ARM_SUBSCRIPTION_ID | Your Azure subscription ID | Environment variable |
ARM_TENANT_ID | The Directory (tenant) ID from your Azure Service Principal | Environment variable |
AWS_ACCESS_KEY_ID | The access key ID from your AWS key pair | Environment variable |
AWS_SECRET_ACCESS_KEY | The secret access key from your AWS key pair | Environment variable |
HCP_CLIENT_ID | The client ID generated by HCP when you created the HCP Service Principal | Environment variable |
HCP_CLIENT_SECRET | The client secret generated by HCP when you created the HCP Service Principal | Environment variable |
azure_resource_group | The name of the Azure Resource Group you created. Packer will store images here, and Terraform will create HashiCups infrastructure here. | Terraform variable |
Wait for Packer to finish building your artifacts, then continue with the tutorial.
Verify artifacts
When Packer finishes building your artifacts, navigate to your learn-packer-multicloud-hashicups bucket in the HCP Packer dashboard.
Click Versions, then select the first version, labeled v1. Notice that this version has two builds: one for Azure and one for AWS.
The respective AWS and Azure data sources in your Terraform configuration reference each of these artifacts.
Create HCP Packer channel
HCP Packer channels let you reference a specific artifact version in Packer or Terraform.
In the HCP console, click Channels, then click New Channel. Create a new channel named production and assign it the v1 version of your learn-packer-multicloud-hashicups bucket.
Terraform will query the production channel to retrieve the Azure and AWS image IDs and deploy the appropriate artifacts.
Deploy artifacts
In your terminal, apply your configuration to deploy the HashiCups artifacts in both Azure and AWS. Respond yes to the prompt to confirm the operation.
$ terraform apply
Running apply in HCP Terraform. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.
Preparing the remote apply...
To view this run in a browser, visit:
https://app.terraform.io/app/hashicorp-learn/learn-packer-multicloud/runs/run-000
Waiting for the plan to start...
Terraform v1.1.6
on linux_amd64
Initializing plugins and modules...
## ...
Plan: 15 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ aws_public_ip = (known after apply)
+ azure_public_ip = (known after apply)
Do you want to perform these actions in workspace "learn-packer-multicloud"?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
## ...
Apply complete! Resources: 15 added, 0 changed, 0 destroyed.
Outputs:
aws_public_ip = "54.193.153.230"
azure_public_ip = "104.42.116.112"
Visit the addresses from the aws_public_ip and azure_public_ip outputs on port 80 in your browser to view the HashiCups application.
Tip
It may take several minutes for the setup script to complete on each instance. If you cannot view the HashiCups dashboard or receive an error response, wait a few minutes before trying again.
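You can also check each instance from the terminal. The following commands, assuming the outputs shown above, print the HTTP status code for each endpoint; a 200 means HashiCups is responding.

```shell
# Check the AWS and Azure instances from the terraform directory
curl -s -o /dev/null -w "%{http_code}\n" "http://$(terraform output -raw aws_public_ip)"
curl -s -o /dev/null -w "%{http_code}\n" "http://$(terraform output -raw azure_public_ip)"
```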
You successfully built and deployed identical artifacts across multiple clouds with Packer and Terraform.
Clean up your infrastructure
Before moving on, destroy the infrastructure you created in this tutorial.
In the terraform directory, destroy the infrastructure for the HashiCups application. Respond yes to the prompt to confirm the operation.
$ terraform destroy
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
- destroy
Terraform will perform the following actions:
##...
Plan: 0 to add, 0 to change, 15 to destroy.
##...
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
##...
Destroy complete! Resources: 15 destroyed.
Delete Azure resource group and artifacts
Your Azure account still contains the machine images in the resource group you created for this tutorial.
In the Azure portal, visit Resource groups. Then, select the name of the resource group you created for this tutorial.
If you want to keep the resource group, select all images in the resource list whose names begin with hashicups_, then select Delete to delete them.
If you no longer need the resource group, select Delete resource group and follow the on-screen instructions to delete it. Azure will delete all resources contained in the resource group, including images, before deleting the group itself.
Delete AWS AMIs
Your AWS account still has AMIs and their respective snapshots, which you may be charged for depending on your other usage.
Note
Remember to delete the AMIs and snapshots in the region where Packer created them. If you did not change the aws_region Packer variable, they will be in the us-west-1 region.
In the AWS console, switch to the us-west-1 region, then deregister the AMI by selecting it, clicking the Actions button, selecting the Deregister AMI option, and confirming by clicking the Deregister AMI button in the confirmation dialog.
Delete the snapshots by selecting them, clicking the Actions button, selecting the Delete snapshot option, and confirming by clicking the Delete button in the confirmation dialog.
Clean up HCP Terraform resources
If you used the HCP Terraform workflow, navigate to your learn-packer-multicloud workspace in HCP Terraform and delete the workspace.
Next steps
In this tutorial, you built artifacts from the same Packer template in AWS and Azure, pushed the metadata to HCP Packer, and deployed virtual machines using the built artifacts. In the process, you learned how you can use Packer and HCP Packer to standardize your artifacts as you adopt a multi-cloud strategy.
For more information on topics covered in this tutorial, check out the following resources.
- Complete the Build a Golden Image Pipeline with HCP Packer tutorial to build a sample application image with a golden image pipeline, and deploy it to AWS using Terraform.
- Complete the Set Up HCP Terraform Run Task for HCP Packer tutorial to learn how to set up run tasks that verify your Terraform configuration references valid HCP Packer artifacts.