Deploying a Website on AWS? Let's Use Terraform

Salik Sayyed
12 min read · Jul 25, 2020

Here we are going to launch a static website whose code lives on the EC2 instance's web server itself.

As a general rule, static content such as images should be served from an S3 bucket, because it is more economical. At the same time, we want that content to be both secure and fast to deliver.

To deliver content from S3 faster, AWS CloudFront comes in handy.

What is AWS CloudFront?

It is a CDN (Content Delivery Network) service. As the name suggests, it is built to deliver content with low latency and high transfer speeds, all within a developer-friendly environment.

CloudFront is so fast at delivering content because it caches static data at edge locations across the world. Whenever a user or client connects, it is served the content from the nearest and fastest available edge location, while the CloudFront DNS name stays the same.

Hence using CloudFront for content delivery is always a good idea.

What is EFS?

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth. Basically it is cloud storage exposed as a normal file system, and the good thing is that, unlike EBS volumes, we are charged only for what we actually use, and it can be mounted from multiple Availability Zones at once (and, over VPC peering, even from another region), which makes it a good fit for shared work.

What will we be doing?

  1. Create a security group which allows port 80.
  2. Launch an EC2 instance.
  3. For this EC2 instance, use an existing or provided key and the security group created in step 1.
  4. Launch one volume using the EFS service, attach it in your VPC, then mount that volume onto /var/www/html.
  5. The developer has uploaded the code into a GitHub repo; the repo also has some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change their permission to public readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

All of the above will be done using Terraform.

Here's the diagram for explanation.

For a set of specific tasks like this it is always good practice to take a modular approach to the Terraform code. It makes later changes and maintenance much easier.

So the file structure I have used is as below:

.
├── clf              # cloudfront
│   └── cldfront.tf
├── filesystem       # file system and instance
│   ├── ec2instance.tf
│   └── efsfile.tf
├── s3               # s3 bucket
│   └── s3bucket.tf
└── main.tf          # main terraform code

Every Terraform configuration starts with a main.tf file, and for AWS we need a provider block that defines which AWS CLI profile to use. Here I have used AWS Educate (because it costs nothing). With AWS Educate there is also a session token that is renewed every hour, so to pick up that change we simply update the AWS CLI credentials file.

The AWS CLI credentials file is located at ~/.aws/credentials. I just copy-pasted the credentials provided by Vocareum.

If you are using the real (costly :{ ) AWS, there is no need for all this overhead.

The one thing main.tf requires is the provider "aws" block for Terraform.

provider "aws" {
region = "us-east-1"
shared_credentials_file = "../../.aws/credentials"
profile = "default"
}
#...will be continued

Here profile and region are the only parameters we really need, but as explained above I also point at the shared credentials file.

Now, as the first module, we will create the S3 bucket, since it does not depend on any other resource.

s3bucket.tf looks like below.

resource "aws_s3_bucket" "imagebucketsalik" {
bucket = "image-bucket-salik-548495651"
acl = "private"
tags = {
Name = "imgs"
Environment = "prod"
}
}
output "s3_object" {
value = aws_s3_bucket.imagebucketsalik.bucket_regional_domain_name
}
resource "null_resource" "m1" {
depends_on = [aws_s3_bucket.imagebucketsalik]
provisioner "local-exec" {
command = "git clone https://github.com/SalikSayyed/staticimages.git"
}
}
resource "aws_s3_bucket_object" "file-uploading" {
bucket = aws_s3_bucket.imagebucketsalik.id
key = "mountain.jpg"
source = "./staticimages/mountain.jpg"
acl = "public-read"
}

Here the access control for the entire bucket is set to private, so there is no public URL for browsing the whole bucket's contents.

This strategy helps protect our data without worrying about password protection and everything else; the most secure thing on a network is the one that is not reachable from the network at all. That is why it is useful.

Here I am cloning the repo from GitHub onto the local system and then uploading the content into the S3 bucket.

Here I have uploaded the image from the local system, but we could also pull it straight from GitHub (for example by downloading it in a local-exec step first) rather than keeping a local copy. Both are fine depending on the use case; I like keeping it on my local system as well.

For every uploaded file we need to specify a key; that key is how others access the object, which is why the key here is simply the file name. And of course, in the modular approach, whatever happens inside this file has to be exposed to the main file somehow. One value we need later is the bucket's regional domain name, hence the output block.
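
If the repo holds more than one image, a hedged variation of the upload resource (assuming Terraform 0.12.8+ for fileset() and that the clone has already finished) can push everything in one go:

# Sketch only: upload every .jpg found in the cloned repo.
resource "aws_s3_bucket_object" "images" {
  for_each = fileset("./staticimages", "*.jpg")

  depends_on = [null_resource.m1]               # wait for the git clone
  bucket     = aws_s3_bucket.imagebucketsalik.id
  key        = each.value                       # object key = file name
  source     = "./staticimages/${each.value}"
  acl        = "public-read"
}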

This completes our module.s3.s3bucket

Now next part is creating CloudFront for accessing the content of the s3 bucket.

cldfront.tf

variable "name_s3" {
type = string
}
resource "aws_cloudfront_distribution" "cld_distro" {
depends_on = [var.name_s3]
enabled = true
origin {
domain_name = var.name_s3
origin_id = "just-imagesfor-website"
}
default_cache_behavior {
allowed_methods = [
"DELETE",
"GET",
"HEAD",
"OPTIONS",
"PATCH",
"POST",
"PUT"]
cached_methods = [
"GET",
"HEAD"]
target_origin_id = "just-imagesfor-website"
viewer_protocol_policy = "allow-all"
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
output "cld_domain" {
value = aws_cloudfront_distribution.cld_distro.domain_name
}

The variable name_s3 declared here is used later in the CloudFront distribution resource. This pattern comes from Terraform's modular approach: it is how this module receives the domain name of the S3 bucket created in the previous file.

For creating a CloudFront distribution we use the aws_cloudfront_distribution resource in Terraform.

This resource has attributes like origin, which describes the origin's domain name; in our case that is the S3 bucket, so we pass in its DNS name. Note: because the uploaded objects are public-read, CloudFront can fetch them directly; for a fully private bucket you would configure an Origin Access Identity instead. Each origin also needs a unique origin_id, which is really just a label.
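
If you wanted to keep even the objects private, a hedged sketch (not what I did here) would add an Origin Access Identity and reference it from the origin block; you would also need a bucket policy granting that identity s3:GetObject.

# Sketch only: let CloudFront read a fully private bucket through an OAI.
resource "aws_cloudfront_origin_access_identity" "oai" {
  comment = "access identity for the image bucket"
}

# and inside the origin block of aws_cloudfront_distribution.cld_distro:
# s3_origin_config {
#   origin_access_identity = aws_cloudfront_origin_access_identity.oai.cloudfront_access_identity_path
# }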

Another important attribute is default_cache_behavior. In it we define which HTTP methods CloudFront will accept from clients, and cached_methods controls which methods' responses CloudFront will actually cache.

The target_origin_id attribute specifies the origin whose content is cached; in our case this is the origin_id defined above.

There are other attributes, like restrictions, for limiting who can access the distribution. One of them is geo_restriction: if we set restriction_type to whitelist and list some country codes, only viewers from those countries will be allowed to reach the CloudFront resource, and nobody else.
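
As a hedged example, a whitelist restriction (country codes here are purely illustrative) would look like this:

# Sketch only: allow viewers from the listed countries and nobody else.
restrictions {
  geo_restriction {
    restriction_type = "whitelist"
    locations        = ["IN", "US"]   # ISO 3166-1 alpha-2 codes, example values
  }
}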

The next thing to build is a security group. Instead of creating a new file, I added it to the main file itself.

Security groups are needed for the instances we will be launching. They basically specify what may connect and what may not: what can come in, what can go out, and on which ports.

continued in main.tf file ...
resource "aws_security_group" "scgrp-ec2" {
name = "cld-security"
ingress {
// Rule for SSH
from_port = 22
protocol = "TCP"
to_port = 22
cidr_blocks = ["0.0.0.0/0"] //means everyone
}
ingress {
//For webapp http connection
from_port = 80
protocol = "TCP"
to_port = 80
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
//for file system of nfs
from_port = 2049
protocol = "TCP"
to_port = 2049
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
protocol = "-1"
to_port = 0
cidr_blocks = ["0.0.0.0/0"]

}
}
..continued later

Here our webserver instance will need SSH, HTTP and NFS connections, hence the respective rules (basically all TCP!). cidr_blocks is the range of IPs we want to allow; it matters when working with subnets and VPC peering, which we do not need here, since we want to let everyone connect, so the CIDR block is 0.0.0.0/0. One thing to mention: the egress block is the outgoing rule, and since we do not want to stop our instance from connecting anywhere, the block above simply allows all outbound traffic.
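
As a hedged refinement, if you later want SSH open to just one admin address instead of the whole internet, the rule can take its CIDR from a variable (the variable name admin_cidr below is just an assumption):

# Sketch only: a tighter SSH rule, replacing the 0.0.0.0/0 ingress above.
variable "admin_cidr" {
  type    = string
  default = "203.0.113.10/32"   # example address from the documentation range
}

ingress {
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = [var.admin_cidr]
}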

Now our security group, S3 bucket and CloudFront setup is written.

Moving on to launching the EC2 instance as a webserver.

ec2instance.tf

variable "scgroup" {
type=string
}
variable "clddomain" {
type=string
}
resource "aws_instance" "ec2_instance" {
ami = "ami-0e9089763828757e1"
instance_type = "t2.micro"
key_name = "sss"
security_groups = [var.scgroup]
connection {
host = aws_instance.ec2_instance.public_ip
type = "ssh"
user = "ec2-user"
private_key = file("../../Downloads/sss.pem")
}
provisioner "file" {
content = "<body><h1>This is cloudfront image fetched from S3 and file system of EFS.</h1><image src='https://${var.clddomain}/mountain.jpg'></body>"
destination = "~/index.html"
}
tags = {
Name="ec2created"
}
}
output "public_ip_instance" {
value = aws_instance.ec2_instance.public_ip
}
resource "null_resource" "ec2_efs_mount" {
depends_on = [aws_efs_mount_target.ec2_mount_ny789,]
connection {
type = "ssh"
user = "ec2-user"
private_key=file("../../Downloads/sss.pem")
host = aws_instance.ec2_instance.public_ip
}provisioner "remote-exec" {
inline =[
"sudo cat ~/index.html",
"sudo yum install -y httpd php git",
"sudo yum install -y amazon-efs-utils",
"sudo service httpd start",
"sudo mkdir /var/www/html/efs-mount-point",
"sudo mount -t efs ${aws_efs_file_system.file_system.id}:/ /var/www/html/efs-mount-point/",
"sudo rm -rf /var/www/html/efs-mount-point/*",
"sudo mv ~/index.html /var/www/html/efs-mount-point/",
"sudo service httpd restart",
]
}
}

I am skipping the variable explanation here because it is the same idea as before: we need these values from the root module, hence we declare them.

For launching an AWS instance we use the aws_instance resource.

The attributes needed for launching the instance are basically:

ami — the AMI ID of the image we want to launch.

instance_type — the type of instance. I am using the free tier for practice :) hence t2.micro is perfect.

key_name — we could also generate our own RSA key pair in Terraform, but to save time I used one I already had; this is the parameter where we specify it (see the sketch just below).
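
For reference, a hedged sketch of generating that key pair inside Terraform instead (resource names here are illustrative) would be:

# Sketch only: create an RSA key in Terraform and register it with AWS.
resource "tls_private_key" "generated" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "aws_key_pair" "generated" {
  key_name   = "terraform-generated-key"
  public_key = tls_private_key.generated.public_key_openssh
}

# then in aws_instance:      key_name    = aws_key_pair.generated.key_name
# and in the connection {}:  private_key = tls_private_key.generated.private_key_pem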

security_groups — this attribute takes a list of security groups to attach to the instance. In my case there is only one, defined above in main.tf, so we need a variable to pass it in.

connection — once the attributes above are applied our EC2 instance is running, so the connection block can open an SSH session to it.

provisioner "file" — to create a file inside the EC2 instance (or any remote system) through Terraform we use this provisioner. I used it to write a small HTML page whose image src points at the CloudFront domain variable.

The HTML code in the file is as below.

"<body><h1>This is cloudfront image fetched from S3 and file system of EFS.</h1><image src='https://${var.clddomain}/mountain.jpg'></body>"

This removes the overhead of changing the CloudFront domain by hand every time. We could fetch the page from GitHub as well.
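
Another hedged option (Terraform 0.12+) is to keep the page in a separate template file and render the domain into it, instead of hard-coding the HTML string in the .tf file; the file name index.html.tpl below is just an assumption:

# Sketch only: render index.html from a template in place of the inline string.
provisioner "file" {
  content = templatefile("${path.module}/index.html.tpl", {
    cld_domain = var.clddomain
  })
  destination = "~/index.html"
}

# index.html.tpl would hold the same markup with ${cld_domain} in the src URL.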

Now the important part is to set up the EFS mount and the HTTP server inside the EC2 instance.

Mounting the EFS file system is easy; it is just like mounting a DVD or any other device, except the device path is replaced by the EFS ID. But for the mount command to resolve that EFS ID we need the AWS EFS mount helper installed on the instance.

Hence we install amazon-efs-utils.

We mount it onto a folder created under /var/www/html.

So in total the commands to run are:

"sudo cat ~/index.html",
"sudo yum install -y httpd php git",
"sudo yum install -y amazon-efs-utils",
"sudo service httpd start",
"sudo mkdir /var/www/html/efs-mount-point",
"sudo mount -t efs ${aws_efs_file_system.file_system.id}:/ /var/www/html/efs-mount-point/",
"sudo rm -rf /var/www/html/efs-mount-point/*",
"sudo mv ~/index.html /var/www/html/efs-mount-point/",
"sudo service httpd restart",

After all this we need to output the running EC2 instance's public IP.

Hence the output public_ip_instance is defined with its value set to aws_instance.ec2_instance.public_ip.

Now the final piece of the puzzle is to launch the EFS resource itself.

For that we have the file below:

efsfile.tf

resource "aws_efs_file_system" "file_system" {
depends_on = [var.scgroup,aws_instance.ec2_instance]
creation_token = "web-fs"
tags ={
Name = "Deployment"
}
}
resource "aws_efs_mount_target" "ec2_mount_ny789" {
depends_on = [aws_efs_file_system.file_system,]
file_system_id = aws_efs_file_system.file_system.id
subnet_id = aws_instance.ec2_instance.subnet_id
}
output "efsid" {
value = aws_efs_file_system.file_system.id
}

Here we use the aws_efs_file_system resource to create the file system; the only attribute we really set is creation_token (plus tags).

Now we have to expose the created file system in a subnet so it can be mounted, and there is a separate resource for that: aws_efs_mount_target. It has two required attributes, file_system_id (which EFS to mount) and subnet_id (where our target instance lives).
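
One hedged refinement: the mount target also accepts its own security_groups list (of group IDs), so the group that opens port 2049 could be attached to it directly, assuming the module is handed the group ID as well (the variable name scgroup_id below is hypothetical):

# Sketch only: a variant of the mount target with the NFS security group attached.
resource "aws_efs_mount_target" "ec2_mount_ny789" {
  file_system_id  = aws_efs_file_system.file_system.id
  subnet_id       = aws_instance.ec2_instance.subnet_id
  security_groups = [var.scgroup_id]   # hypothetical extra variable
}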

After creating this module we need the EFS file system's ID outside it, hence the output variable.

Finally, all modules are done; only the integration in the main file is left.

continued.. main.tf file

continued....module "efs" {
source = "./filesystem"
scgroup = aws_security_group.scgrp-ec2.name
clddomain = module.cldfrnt.cld_domain
}
resource "null_resource" "browseropen" {
provisioner "local-exec" {
command = "echo ${module.efs.public_ip_instance} "
}
}
output "ipaddr_ec2" {
value = module.efs.public_ip_instance
}
output "cloudfronturl"{
value = module.cldfrnt.cld_domain
}
output "efsurl"{
value = module.efs.efsid}

Summing up all the modules.

Final main.tf file

provider "aws" {
region = "us-east-1"
shared_credentials_file = "../../.aws/credentials"
profile = "default"
}
module "s3" {
source = "./s3"
}
module "cldfrnt" {
source = "./clf"
name_s3 = module.s3.s3_object
}
//creating key pairs
//resource "tls_private_key" "key_pair" {
//algorithm = "RSA"
//}
resource "aws_security_group" "scgrp-ec2" {
name = "cld-security"
ingress {
// Rule for SSH
from_port = 22
protocol = "TCP"
to_port = 22
cidr_blocks = ["0.0.0.0/0"] //means everyone
}
ingress {
//For webapp http connection
from_port = 80
protocol = "TCP"
to_port = 80
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
//for file system of nfs
from_port = 2049
protocol = "TCP"
to_port = 2049
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
protocol = "-1"
to_port = 0
cidr_blocks = ["0.0.0.0/0"]
}
}
module "efs" {
source = "./filesystem"
scgroup = aws_security_group.scgrp-ec2.name
clddomain = module.cldfrnt.cld_domain
}
resource "null_resource" "browseropen" {
provisioner "local-exec" {
command = "echo ${module.efs.public_ip_instance} "
}
}
output "ipaddr_ec2" {
value = module.efs.public_ip_instance
}
output "cloudfronturl"{
value = module.cldfrnt.cld_domain
}
output "efsurl"{
value = module.efs.efsid
}

Now run the code from the folder containing main.tf and other module folders.

terraform init — this initializes the working directory, downloading the provider plugins and picking up the module folders.

terraform plan

terraform apply

Look for the output of our provisioned file. I cat it in the remote-exec step just to check that everything is in place.

The installations happen behind the scenes, thanks to the remote-exec provisioner of our EC2 instance module.

It installs the required packages for the webserver and EFS.

Finally you will see all our required resources running on AWS. (Here a single run shows fewer resources than expected because I ran terraform apply in intervals while working through some errors.)

After adding the outputs, which I had forgotten at first :{

Here is our EFS file system created on AWS.

Here is the CloudFront distribution with its origin pointing at the S3 bucket domain.

And finally our launched webserver.

Finally, our website is launched and accessible!

Cross-check these against the console output!

Hurray, 🤩🤩All works great!

And Thanks for Reading!
