Using ChatGPT and Terraform

Soham Dutta
5 min read · Dec 9, 2022


You’ve probably already heard of, and seen examples from, ChatGPT, an amazing (especially since it’s essentially in its infancy) web-based AI program that users interact with in plain English (or many other languages) to get AI-generated responses. The questions you ask can be simple and straightforward or complex and theoretical; the output can be anything from lyrics for a new (or existing!) song to screenplay scenes for a movie, even written in the style of famous people.

Those of a more technical bent are using ChatGPT to write, debug, and optimize code. I’ve personally used it to write docstrings for some Python functions I’d written, and most of the results have been good!

But what about Infrastructure as Code (IaC)? Can it work with Terraform? Actually, yes, it can.

One of the pain points I’ve seen in the past is converting Terraform code that was written for AWS to work in Azure or GCP. Terraform is “agnostic” in that it works with all of those providers, but the code itself is not portable: you can’t just switch providers and expect it to “just work.” You have to understand the provider differences and, in the case of virtual instances, what the equivalent instance type name would be in the other provider.
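Even the provider configuration makes the point before you get anywhere near resources. As a rough illustration (the region, project, and other values here are assumptions, not recommendations), each provider block wants different arguments:

# AWS is typically configured with little more than a region...
provider "aws" {
  region = "us-east-1"
}

# ...Azure requires a (possibly empty) features block...
provider "azurerm" {
  features {}
}

# ...and GCP expects a project (and usually a region or zone).
provider "google" {
  project = "my-project" # hypothetical project ID
  region  = "us-central1"
}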

ChatGPT to the rescue (mostly).

As a basic example, I threw the code below at ChatGPT with the instruction: “Convert the following Terraform code to work in Azure:”

data "aws_ami" "my_base" {
most_recent = true
owners = ["self"]
  filter {
name = "name"
values = ["my-base-*"]
}
}
resource "aws_instance" "my_instance" {
ami = data.aws_ami.my_base.id
iam_instance_profile = aws_iam_instance_profile.my_instance.name
instance_type = "t2.micro"
key_name = var.ec2_key_name
monitoring = false
subnet_id = data.terraform_remote_state.vpc.outputs.public_subnet_a
vpc_security_group_ids = [
aws_security_group.instances.id
]
root_block_device {
volume_type = "gp3"
volume_size = "8"
}
provisioner "local-exec" {
command = "sleep 180;ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ec2-user --private-key ${var.private_key_file} -i '${self.public_ip},' ${path.cwd}/ansible/my_instance.yml"
}
}

Here’s ChatGPT’s response, which came seconds later:

Here is the AWS Terraform code rewritten for Azure:
data "azurerm_image" "my_base" {
most_recent = true
publishers = ["self"]
filter {
name = "name"
values = ["my-base-*"]
}
}
resource "azurerm_virtual_machine" "my_instance" {
image_reference = data.azurerm_image.my_base.id
vm_size = "Standard_B1s"
key_name = var.ec2_key_name
monitoring = false
subnet_id = data.terraform_remote_state.vpc.outputs.public_subnet_a
vpc_security_group_ids = [
azurerm_security_group.instances.id
]
root_block_device {
disk_size_gb = "8"
}
provisioner "local-exec" {
command = "sleep 180;ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ec2-user --private-key ${var.private_key_file} -i '${self.public_ip},' ${path.cwd}/ansible/my_instance.yml"
}
}

That … would (*Edit: sort of) work! It did not complain that it didn’t know about the remote state or the variables, but that would just complicate what I asked it anyway. It also assumed that the user on the new instance was still named ec2-user, but I didn’t expect it to understand that, especially since it was nested inside the provisioner block.
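For comparison, on current versions of the azurerm provider you’d more likely reach for azurerm_linux_virtual_machine than the older azurerm_virtual_machine, with networking hanging off a separate NIC. Here’s a rough sketch of what that might look like; the resource group, NIC, and admin username below are my own assumptions, not anything ChatGPT produced:

resource "azurerm_linux_virtual_machine" "my_instance" {
  name                  = "my-instance"
  resource_group_name   = azurerm_resource_group.main.name     # assumed resource group
  location              = azurerm_resource_group.main.location
  size                  = "Standard_B1s"
  admin_username        = "azureuser"                          # Azure convention, not ec2-user
  network_interface_ids = [azurerm_network_interface.my_instance.id]
  source_image_id       = data.azurerm_image.my_base.id

  admin_ssh_key {
    username   = "azureuser"
    public_key = file(var.public_key_file)
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
    disk_size_gb         = 8
  }
}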

Next I asked “Write the same code to work in GCP.” Here’s that response:

Here is the AWS Terraform code rewritten to work in GCP:
data "google_compute_image" "my_base" {
most_recent = true
owners = ["self"]
filter {
name = "name"
values = ["my-base-*"]
}
}
resource "google_compute_instance" "my_instance" {
image = data.google_compute_image.my_base.id
machine_type = "f1-micro"
metadata = {
ssh-keys = "${var.user}:${file(var.public_key_file)}"
}
network_interface {
subnetwork = data.terraform_remote_state.vpc.outputs.public_subnet_a
}
service_account {
scopes = ["https://www.googleapis.com/auth/cloud-platform"]
}
boot_disk {
initialize_params {
image = data.google_compute_image.my_base.id
}
}
provisioner "local-exec" {
command = "sleep 180;ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -u ${var.user} --private-key ${var.private_key_file} -i '${self.network_interface.0.access_config.0.nat_ip},' ${path.cwd}/ansible/my_instance.yml"
}
}

Again, that (*Edit: sort of) works!
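Sort of: google_compute_instance requires a name, a zone (unless one is set on the provider), and a boot_disk; there is no top-level image argument (the image belongs under boot_disk); and the public IP the provisioner references only exists if the network_interface has an access_config block. A rough sketch of the adjusted resource, with an assumed name and zone:

resource "google_compute_instance" "my_instance" {
  name         = "my-instance"   # required; assumed name
  machine_type = "f1-micro"
  zone         = "us-central1-a" # assumed zone

  boot_disk {
    initialize_params {
      image = data.google_compute_image.my_base.self_link
    }
  }

  network_interface {
    subnetwork = data.terraform_remote_state.vpc.outputs.public_subnet_a
    access_config {} # allocates the ephemeral public IP the provisioner references
  }

  metadata = {
    ssh-keys = "${var.user}:${file(var.public_key_file)}"
  }
}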

Since it seems to be pulling in the equivalent instance family/type from each provider, I wondered whether it actually was correct. Does ChatGPT know which instance types match the other providers?

“What is the equivalent of an AWS t3.micro in Azure and GCP?”

In Azure, the equivalent of an AWS t3.micro instance is the Standard_B1s instance type. In GCP, the equivalent of an AWS t3.micro instance is the f1-micro machine type. These instance types and machine types offer similar performance and specs, including vCPU and memory, as the AWS t3.micro instance.

OK, good response! But can it give me a complete list?

“List all of the AWS t3 instance types and their equivalent types in Azure and GCP in tabular format.”

Here is a table listing the AWS t3 instance types and their equivalent types in Azure and GCP:

| AWS t3 Instance Type | Azure Equivalent | GCP Equivalent |
|----------------------|------------------|----------------|
| t3.nano | Standard_B1s | f1-micro |
| t3.micro | Standard_B1s | f1-micro |
| t3.small | Standard_B1s | f1-micro |
| t3.medium | Standard_B1ms | n1-standard-1 |
| t3.large | Standard_B2s | n1-standard-2 |
| t3.xlarge | Standard_B2ms | n1-standard-4 |
| t3.2xlarge | Standard_B4ms | n1-standard-8 |

Note that the Azure and GCP equivalents listed above are only approximate, and there may be some differences in the exact specs and performance of the different instance types and machine types. It is always best to consult the official documentation and compare the specific specs of the instance types and machine types to ensure that you are choosing the best option for your needs.
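If you actually wanted to encode a mapping like this in Terraform, a simple lookup table in locals gets you most of the way; the values below just mirror ChatGPT’s table (which, as it notes, is only approximate), and var.aws_instance_type is a hypothetical input variable:

locals {
  # AWS t3 type -> rough Azure / GCP equivalents (verify against current spec docs)
  instance_equivalents = {
    "t3.micro"  = { azure = "Standard_B1s",  gcp = "f1-micro" }
    "t3.medium" = { azure = "Standard_B1ms", gcp = "n1-standard-1" }
    "t3.large"  = { azure = "Standard_B2s",  gcp = "n1-standard-2" }
  }

  azure_vm_size    = local.instance_equivalents[var.aws_instance_type].azure
  gcp_machine_type = local.instance_equivalents[var.aws_instance_type].gcp
}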

These are the early days of ChatGPT. Can you imagine if/when they license this technology out to HashiCorp to do this conversion for you? You might be able to get true cloud agnosticism with your IaC, allowing you to quickly move and provision your resources in another provider, without having to spend hours upon hours trying to figure out what it should look like.

Not perfect, but ChatGPT is just a baby. Give ’er time.

* While the Terraform code it provided is technically reasonable, or at least a general guideline for how the original Terraform might look in the other provider, it appears to be using an older version of the provider. There are also more considerations, such as network interfaces, that would need to be addressed; one of the missing pieces is sketched below.
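For example, the Azure conversion hangs a subnet and security group directly off the VM, but in Azure those belong on a separate NIC resource that the generated code never creates. A minimal sketch of that missing piece, with an assumed resource group:

resource "azurerm_network_interface" "my_instance" {
  name                = "my-instance-nic"
  location            = azurerm_resource_group.main.location # assumed resource group
  resource_group_name = azurerm_resource_group.main.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = data.terraform_remote_state.vpc.outputs.public_subnet_a
    private_ip_address_allocation = "Dynamic"
  }
}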

I tried to correct ChatGPT on the Azure creation, but it was stubborn:

“Shouldn’t image_reference instead be storage_image_reference?”

The 'image_reference' field in the Terraform code is correct and should not be changed to 'storage_image_reference'.

In Azure Resource Manager (ARM), the 'image_reference' field is used to specify the source image for a virtual machine. It is typically used with the 'azurerm_image' data source to reference a specific image in Azure Marketplace.

The 'storage_image_reference' field, on the other hand, is used to specify the location of a custom image in Azure Storage. It is typically used with the 'azurerm_virtual_machine_extension' resource to add a custom image to a virtual machine.

In this case, the code is using the 'azurerm_image' data source to reference a specific image in Azure Marketplace, so the 'image_reference' field is the correct one to use.
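For what it’s worth, I believe ChatGPT has this backwards: the (now legacy) azurerm_virtual_machine resource takes a storage_image_reference block, not an image_reference argument. Roughly, and only as a fragment rather than a full working resource:

resource "azurerm_virtual_machine" "my_instance" {
  # ... name, location, resource group, NIC references, vm_size, etc. ...

  storage_image_reference {
    id = data.azurerm_image.my_base.id # reference a custom image by ID
  }
}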

So, relax — our jobs are safe (for now) …

Original article at https://rawdatum.com/posts/7th-piece/


Written by Soham Dutta

Your friendly neighbourhood SRE