How to Create Multiple S3 Buckets using Terraform
Dear Reader, I hope you are doing well. In one of my previous tutorials, I talked about creating an S3 bucket on AWS using Terraform. In this tutorial, I am going to cover how to create multiple S3 buckets using Terraform.
So are you ready??
Alright !!!
Background
While developing and deploying applications on AWS, there are times when you need multiple identical or similar buckets to support your website or other workloads. It doesn't make sense to create them one by one or repeat the same configuration.
Therefore, this post will discuss different ways in which you can create multiple S3 buckets using Terraform in an efficient manner. Not to forget, we will try to keep our code as clean as possible for better maintainability.
Let’s get started 🙂
Prerequisite
- An active AWS account: See How to Setup AWS Free Tier Account in Right Way
- Terraform Installed in Your System: See How to Install Terraform in Your System
- Terraform Credentials Setup: Get Started
- Basic Terraform Knowledge
- A Simple Editor or IDE like VS Code
Assumption: Before you use this tutorial to create an S3 bucket resource, you should know how to create a resource on AWS using Terraform. If you are a beginner, I highly recommend reading my previous post, Getting Started With Terraform on AWS In Right Way. Once you have read it, you are ready to move ahead with this post.
Simplest Way to Create Multiple S3 Buckets using Terraform
As you saw in the last tutorial, you can create a simple S3 bucket using Terraform with the below code -
#Resource to create s3 bucket
resource "aws_s3_bucket" "demo_bucket" {
  bucket = "BucketName"
}
That’s simple.
What would you do if you have to create 3 buckets?
The simplest thing that comes to mind is to specify the resource 3 times, like this -
#Resource to create s3 bucket
resource "aws_s3_bucket" "demo_bucket1" {
  bucket = "BucketName1"
}

#Resource to create s3 bucket
resource "aws_s3_bucket" "demo_bucket2" {
  bucket = "BucketName2"
}

#Resource to create s3 bucket
resource "aws_s3_bucket" "demo_bucket3" {
  bucket = "BucketName3"
}
Awesome !!!
Now can you create 300 S3 buckets for me, please?
Well, I can feel the pain of repeating that 300 times. Not to mention the nightmare you are going to face while managing those 300 resource blocks.
You might be thinking that there must be a way.
And you are right.
Terraform provides (at least) two different ways in which you can achieve this efficiently.
- Using count meta-argument
- Using for_each meta-argument
Note: Meta-arguments are special constructs in Terraform, available on resources and modules, that help us achieve certain requirements, for example creating something in a loop. You can read more about them here.
Let’s get our hands dirty…
Steps to Create Multiple S3 Buckets using Terraform
- Provider Declaration
- Initialize Your Project Directory
- Prepare Configuration file to Create Multiple S3 Buckets in Terraform
- Deploy the Configuration to Create Multiple Buckets using Terraform
- Validate Bucket Creation
Step 1: Provider Declaration
Before you can create S3 buckets on AWS using Terraform, you need to create a configuration file and specify what you want.
Create a project and a main.tf file.
Start with the provider declaration (AWS) and specify how Terraform will authenticate with AWS. As you can see, I have specified a profile, which is created when you set up the AWS CLI on your system.
This is how it looks -
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "ap-south-1"
}
As you can see, we have specified that we’ll be working with the AWS provider. This is a necessary step and without this code, you won’t be able to work with AWS.
Step 2: Initialize Your Project Directory
You have specified the provider declaration. Now it's time to tell Terraform to download the provider-specific code/plugins that it needs to work with AWS.
If you try to run the terraform plan or terraform apply command without running terraform init first, you will get the below error -
As you can see, it says the provider is required and suggests running terraform init.
Let’s do that.
Once you run terraform init, AWS-specific plugins get installed and you are ready to create your first resource using Terraform.
Important Note: You only need to do this once. Once the provider-specific code is downloaded, you don't need to run this command again unless you change the provider version or add something that requires pulling extra code.
Step 3: Prepare the Configuration File to Create Multiple S3 Buckets in Terraform
Now that we are ready, let's create an example Terraform configuration file to create multiple S3 buckets on AWS. As I said already, we can use the count or for_each meta-argument for this. Let's see them one by one.
3.1. Create Multiple S3 Buckets in Terraform using count
In Terraform, count is a meta-argument that you can use on a resource or module. It accepts a whole number and creates that many instances of the resource or module.
Let's simplify this a bit. Here is how count works: if you can create one single resource using the below code -
#Resource to create s3 bucket
resource "aws_s3_bucket" "versioning_bucket" {
  bucket = "demo-ck-18th"
}
Then you can create 5 resources by specifying count = 5
#Resource to create s3 bucket
resource "aws_s3_bucket" "versioning_bucket" {
  count  = 5
  bucket = "demo-ck-18th"
}
Looks simple?
Hell Yeah 🙂
Okay. But the problem is, AWS S3 bucket names must be globally unique. Once you have created a bucket, neither you nor anyone else can create another bucket with the same name. You can read more about it here: This is Why AWS S3 Bucket Name is Unique
If you try doing so, after the first bucket creation, all subsequent ones will fail with the below error-
Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous request to create the named bucket succeeded and you already own it.
What you can do instead is create a variable of type list and provide the names of all the buckets you want to create.
For example-
#Variable Declaration
variable "bucket_list" {
  type    = list(string)
  default = ["demo-ck-1", "demo-ck-2", "demo-ck-3"]
}

#Resource to create s3 bucket
resource "aws_s3_bucket" "demo_bucket" {
  count  = length(var.bucket_list)
  bucket = var.bucket_list[count.index]
}
As you can see above, count takes the value of the length of the list, and the bucket name is set using var.bucket_list[count.index] - the familiar list[index] syntax for accessing list or array elements.
This will help you create a dynamic number of S3 buckets using Terraform efficiently in a much cleaner way.
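Once the buckets are created, you will often want to refer to all of them at once, for example in an output. Here is a minimal sketch of how that could look (the output name bucket_names is just my choice for illustration); with count, the resource behaves like a list, so a splat expression collects every bucket name:

#Hypothetical output: with count, aws_s3_bucket.demo_bucket is a list,
#so a splat expression returns all the bucket names
output "bucket_names" {
  value = aws_s3_bucket.demo_bucket[*].id
}

After terraform apply, running terraform output bucket_names should print the three names from the variable.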
3.2. Create Multiple S3 Buckets in Terraform using for_each
for_each is a meta-argument that you can use on a resource, module, or inline block of a resource. It works similarly to count in that it creates multiple instances of a resource, based on a map or a set of strings.
So this is how you create multiple buckets in Terraform using for_each -
#Variable Declaration
variable "bucket_list" {
  type    = list(string)
  default = ["demo-ck-1", "demo-ck-2", "demo-ck-3"]
}

#Resource to create s3 bucket
resource "aws_s3_bucket" "demo_bucket" {
  for_each = toset(var.bucket_list)
  bucket   = each.key
}
Explanation:
Here too, we have created a variable bucket_list with all the bucket names that we want to create.
Then we specified for_each = toset(var.bucket_list)
The for_each meta-argument accepts a map or a set of strings, and creates an instance for each item in that map or set - in our case, an S3 bucket.
Since for_each only accepts a map or a set, we have converted our list to a set using the built-in function toset.
And then we specified bucket = each.key to use the bucket name from the variable. In the case of a set, each.key and each.value are the same, so you can use either of them.
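By the way, for_each really shines when each bucket needs slightly different settings. Here is a small hypothetical sketch (the bucket_map variable, the tagged_bucket resource name, and the tag values are all made up for illustration) that drives both the bucket name and a tag from a map:

#Hypothetical variable: map of bucket name => environment tag
variable "bucket_map" {
  type = map(string)
  default = {
    "demo-ck-dev"  = "dev"
    "demo-ck-prod" = "prod"
  }
}

#Each map key becomes a bucket name, each map value becomes a tag
resource "aws_s3_bucket" "tagged_bucket" {
  for_each = var.bucket_map
  bucket   = each.key
  tags = {
    Environment = each.value
  }
}

With a map, each.key and each.value are no longer the same, which is exactly what makes this pattern useful.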
3.3. Final Code Example to Create Multiple S3 Buckets using Terraform
Depending on which option you choose from above, your final code may look a bit different from ours. Between count and for_each, for_each is the recommended way, so I am going ahead with for_each.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
}

provider "aws" {
  profile = "default"
  region  = "ap-south-1"
}

#Variable Declaration
variable "bucket_list" {
  type    = list(string)
  default = ["demo-ck-1", "demo-ck-2", "demo-ck-3"]
}

#Resource to create s3 bucket
resource "aws_s3_bucket" "demo_bucket" {
  for_each = toset(var.bucket_list)
  bucket   = each.key
}
Step 4: Deploy the Configuration to Create Multiple Buckets using Terraform
We are ready with our configuration. Time to deploy it.
Open a terminal in the folder where you have your Terraform configuration file.
Initialize the directory with the necessary AWS plugins by running terraform init (if you haven't already done so in Step 2)
Run terraform apply
Our resources are successfully created. Time to validate them.
Step 5: Validate Bucket Creation
Congratulations !!! you have successfully created multiple s3 buckets using Terraform.
As you can see in the above screenshot, three buckets are created.
However, if you wish you can check the same in the AWS console as well.
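If you prefer to validate from Terraform itself rather than the console, one option is to add an output that lists the buckets created by the for_each resource and read it with terraform output. This is a minimal sketch; the output name created_bucket_names is just an illustration:

#Hypothetical output: with for_each, aws_s3_bucket.demo_bucket is a map,
#so we iterate over its values to collect the bucket names
output "created_bucket_names" {
  value = [for b in aws_s3_bucket.demo_bucket : b.id]
}

After terraform apply, running terraform output created_bucket_names should print the three bucket names from the variable.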
Clean Up
Finally, if you are doing this exercise for learning purposes, you can clean up by destroying the created resources.
terraform destroy
Type yes, and hit enter
Once you hit enter, your resources get destroyed. You can sleep peacefully without worrying about the cost now.
PS: By the way, you can do one more thing: set a cost budget on your AWS account to protect yourself against unwanted costs. Here is how you can do that: How to Create a Cost Budget in AWS to Keep Your AWS Bills in Check
Conclusion:
In this post, we learnt how to create multiple S3 buckets using Terraform. We covered two different ways (count and for_each) that can be used to create multiple resources efficiently without repeating your code.
This helps us keep our code clean and more maintainable.
Please let me know in the comment if you found the post useful.
Enjoyed the content?
Subscribe to our newsletter below to get awesome AWS learning materials delivered straight to your inbox.
If you liked reading my post, you can motivate me by-
- Adding a comment below on what you liked and what can be improved.
- Following us on Facebook, Twitter, LinkedIn, Instagram
- Sharing this post with your friends and colleagues.
Suggested Read