My Terraforming efforts continue in my homelab, but Terraform alone isn’t enough. Some of my services require ongoing management, and I want to capture that configuration as code to make it reproducible when needed. Puppet is perfect for that.
But if I’m already using Terraform, I don’t want to also have something like Foreman around to manage Puppet node classification. It would be redundant. I need a way to do that directly in Terraform.
Enter Consul.
The Concept
Puppet is fairly simple in concept. You write up a bunch of classes that define your server’s configuration. These then get applied to a node as specified by one of a few methods:
- node blocks in the site.pp file for your environment
- Loaded from hiera via a small hack
- Obtained from an external node classifier, or ENC
The first one is a non-starter for this project; I want to build classes that do things, and then tell terraform what classes to apply. Editing puppet code to do that is right out. The second one would work, but it’s a little ugly.
That leaves the ENC method, which is what Foreman-type systems use. This is perhaps the most flexible of the three, as you can also set which environment your puppet agent is supposed to be working in (e.g. production vs. development).
An ENC is little more than a script that prints a chunk of YAML with a list of classes to apply, the environment setting, and an optional list of variables to set. It’s not complicated at all.
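For example, an ENC’s output might look like this (the class, environment, and variable here are purely illustrative):

classes:
- role::webserver
environment: production
parameters:
  datacenter: home

So how do we integrate this with puppet?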
We use Consul, of course.
Consul Saves The Day
The first part of this is finding a way to get data from terraform to Puppet. Consul’s KV store gives us a very easy way to do this; we can simply stick the YAML block into an appropriate key and have the ENC script look it up from there.
It starts with a supremely simple wrapper script that will serve as our ENC:
#!/bin/bash
# Fetch the raw ENC YAML for the node named in $1 from the local Consul agent.
# The URL is quoted so the shell doesn't treat the ? as a glob character.
curl -s -o- "http://localhost:8500/v1/kv/puppet-enc/host/$1?raw"
Add this to something like /etc/puppetlabs/code/puppet_enc on your puppetserver host, make it executable, and then add the appropriate configuration to puppet.conf:
[master]
# ...
node_terminus = exec
external_nodes = /etc/puppetlabs/code/puppet_enc
Restart your server, and now when it needs to classify a host, it will call the script with the FQDN as the argument. The script will ask the server’s local consul agent (you do have that installed, right?) for the value at /puppet-enc/host/${hostname} and return it to the server. The content needs to be in YAML format.
At this point, you can manually add working ENC entries if you so choose, simply by sticking the YAML into consul. The script can be extended if you’re using Consul’s ACLs; simply have curl also send the appropriate access token.
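To try it by hand, something like this should do the trick (the hostname and class are placeholders; consul kv put reads the value from stdin when given -):

consul kv put puppet-enc/host/web-01.example.com - <<'EOF'
classes:
- role::webserver
environment: production
EOF

# The ENC script should now echo that YAML back:
/etc/puppetlabs/code/puppet_enc web-01.example.com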
Simple enough, right?
Integrating With Terraform
Terraform makes this very easy: it has a provider already built in for Consul. All you have to do is set it up and go. If you’re not using ACLs, it’s actually a one-liner to configure the provider:
provider "consul" { }
There are, of course, parameters you can set for security and the like; see the terraform documentation for more details. For now this will do.
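If you are using ACLs, a sketch might look like this (the address and the token variable are placeholders):

provider "consul" {
  address = "consul.example.com:8500"
  token   = "${var.consul_acl_token}"
}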
Adding the ENC value to Consul for puppet to grab is as simple as adding a consul_keys resource. For example, let’s say you want to have it set your environment and apply a role class based on variables. It might look something like this:
resource "consul_keys" "puppet-enc" {
datacenter = "${var.consul_datacenter_name}"
key {
delete = true
path = "puppet-enc/host/${var.host_fqdn}"
value = <<-ENC
classes:
- ${var.puppet_role_class}
environment: ${var.puppet_environment}
ENC
}
}
And… that’s it.
The delete attribute tells the resource that it should delete the key if the resource is destroyed. This is handy for automatically keeping your ENC tree clean. Everything else can be specified by variables, or in whatever other way you want. Now as soon as this resource is created, the ENC data will be there for the ENC script to hand to puppet.
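For completeness, a minimal sketch of the variable declarations this example assumes might look like this (the defaults are placeholders):

variable "consul_datacenter_name" {
  default = "dc1"
}

variable "host_fqdn" {}

variable "puppet_role_class" {}

variable "puppet_environment" {
  default = "production"
}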
The only possible catch is ordering; if you’re firing off puppet as part of the provisioning process, you want to make sure that this resource gets created before your instance does. So in my vmware world, I might do something like this:
resource "vsphere_virtual_machine" "vm" {
#
# ...attributes for the VM...
#
depends_on = [ "consul_keys.puppet-enc" ]
}
That will ensure that Terraform always adds the ENC data first.
And that’s all you need for the most basic puppet integration. This can obviously be expanded; more stuff can be added to the ENC data, etc. But it really is that simple!
Automatically Removing Puppet Certificates
The only remaining problem is that you’re likely to run into trouble if you destroy and then rebuild a host. At that point, puppetserver will already have a certificate for the host in question, but the host itself won’t because it’s new. Puppet runs will fail.
The easiest way to solve this problem is to teach Terraform to clear out the puppet certificate when the host is destroyed. In order to do that, you first need to set the puppetserver’s auth.conf appropriately. It’s a big file; you’re looking for the entry for /puppet-ca/v1/certificate_status. Note that there are two similar entries; you don’t need the one that is “statuses”, just the singular one. When you’re done it should look like this:
{
    # Allow the CA CLI to access the certificate_status endpoint
    match-request: {
        path: "/puppet-ca/v1/certificate_status"
        type: path
        method: [get, put, delete]
    }
    allow: [
        {
            extensions: {
                pp_cli_auth: "true"
            }
        },
        terraform
    ]
    sort-order: 500
    name: "puppetlabs cert status"
},
The key there is turning the allow section into an array and adding terraform to it. That tells the puppetserver that anything with a cert matching the name terraform will be allowed to use this API endpoint as specified. You’ll probably have to restart your puppetserver for the change to take effect.
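On a typical systemd-based install, that’s just:

sudo systemctl restart puppetserver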
Next, you need to create the certificate on your puppetserver (I’m assuming you have autosigning turned on; if not, you probably know what to do):
sdoyle@puppet-01:~$ sudo puppetserver ca generate --certname terraform
Successfully saved private key for terraform to /etc/puppetlabs/puppet/ssl/private_keys/terraform.pem
Successfully saved public key for terraform to /etc/puppetlabs/puppet/ssl/public_keys/terraform.pem
Successfully submitted certificate request for terraform
Successfully saved certificate for terraform to /etc/puppetlabs/puppet/ssl/certs/terraform.pem
Certificate for terraform was autosigned.
Create a certificates directory in your terraform codebase, and copy the following files from the paths above (a sample scp invocation follows the list):
- Copy the private key (*/private_keys/terraform.pem) to certificates/terraform-private.pem
- Copy the certificate (*/certs/terraform.pem) to certificates/terraform-cert.pem
- Place a copy of the CA cert (usually /etc/puppetlabs/puppet/ssl/certs/ca.pem) in certificates/ca.pem
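For example, assuming the puppetserver is reachable over ssh as puppet-01 (as in the transcript above) and your user can read those paths (the private key is normally root-only, so you may need to stage a copy with sudo first):

# Hypothetical hostname and access; adjust for your environment
scp puppet-01:/etc/puppetlabs/puppet/ssl/private_keys/terraform.pem certificates/terraform-private.pem
scp puppet-01:/etc/puppetlabs/puppet/ssl/certs/terraform.pem certificates/terraform-cert.pem
scp puppet-01:/etc/puppetlabs/puppet/ssl/certs/ca.pem certificates/ca.pem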
Now you have the authentication data you need for puppet in your terraform codebase. All you need to do now is set up a destroy provisioner to execute the necessary command to remove the host’s certificate. The null resource provider is particularly good for this in combination with curl:
resource "null_resource" "puppet_certificate_cleaner" {
triggers = {
vm = "${vsphere_virtual_machine.vm.id}"
}
provisioner "local-exec" {
when = "destroy"
command = "curl -s -o- --noproxy '*' -X DELETE -H 'Accept: pson' https://puppet:8140/puppet-ca/v1/certificate_status/${var.host_fqdn} --cacert certificates/ca.pem --key certificates/terraform-private.pem --cert certificates/terraform-cert.pem"
}
}
The triggers {} block ties the resource to the ID of the virtual machine. Now, whenever the virtual machine is either rebuilt or destroyed, it will automatically execute the curl command to delete the certificate. Piece of cake.
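If you want to sanity-check the credentials before relying on the destroy hook, the same endpoint answers GET requests (the hostname here is a placeholder):

curl -s --noproxy '*' -H 'Accept: pson' https://puppet:8140/puppet-ca/v1/certificate_status/web-01.example.com --cacert certificates/ca.pem --key certificates/terraform-private.pem --cert certificates/terraform-cert.pem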
You can alternatively add this as a direct local-exec destroy provisioner in your virtual machine resource; I chose to do it this way because it helps me keep all my puppet integration pieces in a single puppet.tf file in the module I use for creating VMs.
That’s All Folks
And that’s it. That’s all you really need for basic puppet integration with terraform. This can be easily expanded into even more complex things; all the pieces are here. In my case, I have all this bundled up into a module that I can use when I create virtual machines, so it’s automatically present for every host I build.
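As a rough sketch (the module path and inputs here are just illustrative, not my exact layout), using such a module looks something like:

module "web-01" {
  source             = "./modules/vm"
  host_fqdn          = "web-01.example.com"
  puppet_role_class  = "role::webserver"
  puppet_environment = "production"
}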
And now I can fully build my hosts with ease; I just type all the specs into terraform and puppet.