How to Execute EC2 User Data Script using Terraform

Dear reader, I hope you are doing well. In one of my previous posts, I shared with you how to execute EC2 user data script using CloudFormation. Today, I’ll explain and demo how to execute EC2 user data script using Terraform.

As you might already know, an EC2 user data script lets you bootstrap your EC2 instance by dynamically executing commands that you specify after your instance boots.

This comes in really handy when you want to automate some configuration tasks or simply want to run some script once your instance starts.

Let’s start by learning a bit more about EC2 user data.

Shall we start?

Alright!! Time to begin…

Don’t want to miss any posts from us? Join us on our Facebook group, and follow us on Facebook, Twitter, LinkedIn, and Instagram. You can also subscribe to our newsletter below so you don’t miss any updates from us.


EC2 User Data Overview

In very simple terms, user data is a set of commands that you can specify at the time of launching your instance. These commands execute after your EC2 instance starts.

And the amazing thing is that you don’t even need to SSH into your EC2 instance. All you need to do is provide the script in the user data section, and it will be executed once your instance boots up.

Let’s see an example-

Say you want to create a file log.txt in the /dev folder as soon as your instance starts. To achieve this, you can specify the user data below.

#!/bin/bash
touch /dev/log.txt

For your information, you can pass two types of user data to your EC2 instance-

  1. shell scripts
  2. cloud-init directives

Note: User data scripts run as the root user so you don’t need to specify sudo with your commands.
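While this post uses a shell script, here is a minimal sketch of the second option, the same log-file task written as cloud-init directives (cloud-config format):

#cloud-config
runcmd:
  - touch /dev/log.txt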

EC2 User Data and Terraform

When you launch an EC2 instance using terraform, you can specify your user data like the below snippet-

resource "aws_instance" "demo-instance" {
..................
..................

user_data = "....."

..................
..................
}

When we were doing this with CloudFormation, we needed to base64-encode our user data script. However, using terraform, you can simply provide your user data script in heredoc string format and terraform will take care of the rest.

This is what the heredoc format looks like. Note the <<- marker: it strips the leading indentation so that the #!/bin/bash line starts at the very beginning of the user data, which is required for it to be recognized as a shell script.

user_data = <<-EOF
  #!/bin/bash
  touch /dev/log.txt
EOF

Moreover, if you already have your user data script in base64-encoded format, then instead of the user_data parameter, use the user_data_base64 parameter. Additionally, you can check the official documentation for more details.

There is another parameter that is used along with user_data and user_data_base64: user_data_replace_on_change.

It is optional and defaults to false. When set to true, a change to the user data will trigger a destroy and recreate of your EC2 instance.
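Here is a minimal sketch of how these two parameters could be used together. It assumes your script lives in a file such as ec2-user-data.sh (the file we also use later in this post); base64encode() and file() are built-in Terraform functions, and user_data_replace_on_change may need a newer AWS provider release than the 3.x series pinned in the examples below.

resource "aws_instance" "demo-instance" {
..................

  # Pass the script already base64-encoded, instead of using user_data
  user_data_base64 = base64encode(file("ec2-user-data.sh"))

  # Destroy and recreate the instance whenever the user data changes
  user_data_replace_on_change = true

..................
}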

Use Case that We’ll Implement Today

In all the other posts related to EC2 user data, we install an Apache web server on an EC2 instance. This post is no different: we will install an Apache web server on our instance using Terraform. We already have a script handy for that. Have a look at it below.

EC2 User Data to Install Apache Web Server

#!/bin/bash
yum update -y
yum install -y httpd.x86_64
systemctl start httpd.service
systemctl enable httpd.service
echo "Hello World from $(hostname -f)" > /var/www/html/index.html

In case you want to learn more about apache web server installation on EC2, feel free to check my previous post.

What is Terraform and How to use it to Create a Resource on AWS?

When it comes to creating and managing resources on AWS, there are quite a few ways, for example the AWS console, the CLI, CloudFormation, etc.

Terraform is also one of them. It is a very popular, open-source Infrastructure as Code (IaC) tool by HashiCorp.

  • You can use it to provision, update, and version your infrastructure in an efficient manner.
  • You declare your required infrastructure in a configuration file and Terraform creates it in the correct order.
  • Configuration files are written in a human-readable format using the HashiCorp Configuration Language (HCL); JSON is also supported.
  • Terraform is cloud-agnostic and supports numerous cloud providers like AWS, Azure, GCP, etc.

How to Create a Resource on AWS using Terraform

Unlike CloudFormation, you’ll have to install Terraform in your system before you can use it to create a resource like an EC2 instance.

Once installed, you create your configuration file (it has a .tf extension, for example file-name.tf) and use the below set of commands to deploy your resources.

$ terraform init
$ terraform plan
$ terraform apply
$ terraform destroy

I highly recommend you check my step-by-step guide to help you get started with Terraform on AWS correctly. Here is the link to the post: Getting Started With Terraform on AWS In Right Way

If you are reading this line, I assume you already know how to deploy a resource on AWS using Terraform.

Alright, let’s get started to create an EC2 instance with user data.

Prerequisite: Terraform installed on your machine and an AWS credential profile configured (this post uses the default profile and the ap-south-1 region).

Steps to Execute EC2 User Data Script using Terraform

  1. Create a Working Directory/Folder
  2. Create your EC2 Instance Configuration File
  3. Initialize Your Directory to Download AWS Plugins
  4. Plan and Deploy
  5. Provide EC2 User Data script using File in Terraform
  6. Validate EC2 User Data Script Execution

Step 1: Create a Working Directory/Folder

First of all, create a folder or working directory in which you’ll keep your Terraform configuration file. Basically, this will be your working directory to create your resource using Terraform and can contain other files such as variable files etc.
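For example, from a terminal (the folder name below is just an illustration, use whatever you like):

mkdir ec2-user-data-demo
cd ec2-user-data-demo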


Step 2: Create your EC2 Instance Configuration File

Navigate inside the folder and create your configuration file. You can name it as per your wish, but to keep things simple, I will name it main.tf

I have started with a provider declaration specifying that we are using the AWS provider. Additionally, it specifies the credential profile that will be used to authenticate to AWS and the region in which resources are created by default.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

#Provider profile and region in which all the resources will be created
provider "aws" {
  profile = "default"
  region  = "ap-south-1"
}

Now let’s add an EC2 instance with user data and a security group to allow inbound and outbound traffic. Since we are installing an Apache web server and will be testing it by calling the public IP from a browser, the security group configuration is necessary. After adding these resources, this is what our configuration file looks like-

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "ap-south-1"
}


#Variable Declarations
variable "ami-mumbai" {
  type = string
  default = "ami-06489866022e12a14" # ap-south-1
}

variable "key-name" {
  type = string
  default = "MyDemoEC2eyPair"
}

#EC2 instance using UserData
resource "aws_instance" "demo-instance" {
	ami = var.ami-mumbai
	instance_type = "t2.micro"
	key_name = var.key-name
	vpc_security_group_ids = [aws_security_group.allow_port80.id]
	user_data = <<EOF
		#!/bin/bash
		yum update -y
		yum install -y httpd.x86_64
		systemctl start httpd.service
		systemctl enable httpd.service
		echo ?Hello World from $(hostname -f)? > /var/www/html/index.html
	EOF
}

#Security Group Resource to open port 80 
resource "aws_security_group" "allow_port80" {
  name        = "allow_port80"
  description = "Allow Inbound Traffic on Port 80"

  ingress {
    description      = "Port 80 from Everywhere"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}

output "public_ip" {
  value = aws_instance.demo-instance.public_ip
}

Please note that in the above configuration we have defined both inbound and outbound rules. That might look a bit odd. The reason is that although AWS by default adds an allow-all egress rule to every new security group, Terraform removes this default rule when it creates a security group inside a VPC. Without an explicit egress rule, your Apache installation would fail because the instance has no outbound network connectivity. So make sure to define both.

Step 3: Initialize Your Directory to Download AWS Plugins

Open the command prompt or terminal and navigate to your working directory.

You only do this step once per folder/directory. It downloads the relevant code/plugins for the provider you specified, which in our case is AWS.

terraform init

Once you hit enter, your working directory gets initialized with the provider-related code and is ready to deploy an EC2 resource.

Step 4: Plan and Deploy

Save your configuration in a file with the .tf extension (we already named ours main.tf). Before using the above file, make sure to replace the variable defaults such as the AMI ID and the key pair with your own.
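Alternatively, instead of editing the defaults in the file, you can override the variables on the command line; the same -var flags work with both terraform plan and terraform apply (the values below are placeholders):

terraform apply -var="ami-mumbai=<your-ami-id>" -var="key-name=<your-key-pair>"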

By now, the configuration file is created and the directory is initialized. That means we are ready to deploy our ec2 with the user data script.

At this stage, if you want, you can run the command terraform plan to see what’s actually being created.

terraform plan

Running terraform plan shows you what is going to be created.

However, to keep things simple, I just run terraform apply. Terraform runs a plan anyway every time you run terraform apply, so why do the same thing twice?

Makes sense?

I am sure, it does 🙂
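So, from your working directory, run:

terraform apply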

Terraform will look for the .tf files and show you what is about to be created. Review the output of the plan and, if all looks fine, confirm with yes; only then will the resources be created.

Once you confirm, within a few seconds your EC2 instance along with its security group gets created.


The instance is successfully created; note the public IP from the output. We’ll use it while validating the user data execution. But first, let’s look at one more way in which you can specify your EC2 user data script.

Step 5: Provide EC2 User Data script using File in Terraform

In the above section, we simply provided the user data script as a heredoc string. However, you can use a file as well.

Paste the content of the user data script in a file named ec2-user-data.sh. After that, change your user_data parameter to use the file instead of the string.

Here is how you can do that-

user_data = file("ec2-user-data.sh")

The file(path) function reads the user data script file at the given path and returns its contents as a string.

Note: The path is relative here. A plain ec2-user-data.sh means that the .tf and the .sh files sit at the same directory level.
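If you want the path resolved relative to the directory containing the configuration, regardless of where you run Terraform from, you can anchor it with the built-in path.module value, for example:

user_data = file("${path.module}/ec2-user-data.sh")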

Configuration file to execute EC2 User Data from a File

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "ap-south-1"
}


#Variable Declarations
variable "ami-mumbai" {
  type = string
  default = "ami-06489866022e12a14" # ap-south-1
}

variable "key-name" {
  type = string
  default = "MyDemoEC2eyPair"
}

#EC2 instance using UserData
resource "aws_instance" "demo-instance" {
	ami = var.ami-mumbai
	instance_type = "t2.micro"
	key_name = var.key-name
	vpc_security_group_ids = [aws_security_group.allow_port80.id]
	user_data = "${file("ec2-user-data.sh")}"
}

#Security Group Resource to open port 80 
resource "aws_security_group" "allow_port80" {
  name        = "allow_port80"
  description = "Allow Inbound Traffic on Port 80"

  ingress {
    description      = "Port 80 from Everywhere"
    from_port        = 80
    to_port          = 80
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
}

output "public_ip" {
  value = aws_instance.demo-instance.public_ip
}

Step 6: Validate EC2 User Data Script Execution

The EC2 user data script runs once the instance has booted. We have our public IP, so let’s hit it from the browser to see if Apache is up and running with our custom message.

If you are getting a "site can not be reached" error message, it might be because we only allowed port 80, i.e. HTTP, and you are trying to access the site over HTTPS. So manually type http://yourip and hit enter; you should be good to go. Also, give the user data script some time to finish running before you test.
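You can also verify from the command line with curl, reading the IP from the public_ip output we defined earlier (terraform output -raw is available from Terraform 0.14 onwards):

curl http://$(terraform output -raw public_ip)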

Clean Up

Finally, if you are doing this exercise for learning purposes, you can clean up by destroying the created resource.

terraform destroy

Type yes, and hit enter

Once you hit enter, your resources get destroyed and Terraform prints a destruction-complete message.

The resources are deleted and the public IP is released 🙂 You can sleep peacefully without worrying about the cost now.

PS: You can also set a cost budget on your AWS account to protect yourself against unwanted costs. Here is how you can do that: How to Create a Cost Budget in AWS to Keep Your AWS Bills in Check

Conclusion:

In this post, we learnt how to execute EC2 user data script using Terraform.

We learnt-

  • What user data is and how it lets you bootstrap an instance
  • How to specify user data correctly
  • How to specify user data as a string and as a file
  • How to validate user data execution

Well, that was my take on “How to Execute EC2 User Data Script using Terraform”. Please feel free to share your feedback.

Enjoyed the content?

Subscribe to our newsletter below to get awesome AWS learning materials delivered straight to your inbox.

If you liked reading my post, you can motivate me by-

  • Adding a comment below on what you liked and what can be improved.
  • Follow us on Facebook, Twitter, LinkedIn, Instagram
  • Share this post with your friends and colleagues.

