Deploying Infrastructure (Website) on AWS and Integrating It with EFS (Storage) Using Terraform

Sanket Bendale
5 min read · Aug 9, 2020


In this article I have updated my Task 1 by performing all the same steps as before, but instead of an EBS volume I am using the EFS service for storage.

What is EFS?

Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.

Amazon EFS is a fully managed service providing NFS shared file system storage for Linux workloads. Amazon EFS makes it simple to create and configure file systems. You don’t have to worry about managing file servers or storage, updating hardware, configuring software, or performing backups. In seconds, you can create a fully managed file system by using the AWS Management Console, the AWS CLI, or an AWS SDK.

Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN.

Task :

  1. Create the key and security group which allows port 80.
  2. Launch an EC2 instance.
  3. In this EC2 instance, use the key and security group created in step 1.
  4. Launch one volume using the EFS service, attach it to your VPC, and then mount that volume onto /var/www/html.
  5. The developer has uploaded the code to a GitHub repo; the repo also contains some images.
  6. Copy the GitHub repo code into /var/www/html.
  7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.
  8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

First, create a provider block, which helps Terraform download the right plugin onto your system for further execution (run terraform init in the workspace once to download it before the first apply).

provider "aws" {
  region  = "ap-south-1"
  profile = "sanket"
}

Create Security Group and Key — Create a security group that allows port 80 for HTTP, port 22 for SSH, and port 2049 so NFS traffic to EFS can flow, along with a security key, and use them while launching the EC2 instance.

resource "aws_security_group" "allow_http" {
name = "allow_http"
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "allow_http"
}
}
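The instance below refers to a key pair named key2. If that key pair does not already exist in the account, it can also be created from Terraform; a minimal sketch, assuming the matching public key is saved locally (the path here is illustrative):

resource "aws_key_pair" "key2" {
  # Assumption: the public half of the key pair lives at this illustrative path.
  key_name   = "key2"
  public_key = file("C:/Users/sai/Downloads/key2.pub")
}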

Create an EC2 instance and launch it with the security group and key created above. If any software needs to be installed after the instance comes up, instead of doing it manually we use a remote-exec provisioner over SSH.

resource "aws_instance" "myos" {
ami = "ami-0447a12f28fddb066"
instance_type = "t2.micro"
key_name = "key2"
security_groups = [ "allow_http" ]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/sai/Downloads/key2.pem")
host = aws_instance.myos.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo yum install httpd -y",
"sudo yum install php -y",
"setenforce 0",
"sudo systemctl start httpd",
"sudo systemctl enable httpd",
]
}
tags = {
Name = "cloud-os"
}
}
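It can also help to output the instance's public IP so the site can be opened as soon as the apply finishes; a small optional sketch (not part of the original task steps):

output "instance_ip" {
  value = aws_instance.myos.public_ip
}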

Create an EFS file system in my default VPC using the same security group as the instance, and then attach it to the EC2 instance's subnet through a mount target.

resource "aws_ebs_volume" "ebs1" {
availability_zone = aws_instance.myos.availability_zone
size = 2

tags = {
Name = "ebs1"
}
}
resource "aws_volume_attachment" "ebs_att" {
device_name = "/dev/sdr"
volume_id = aws_ebs_volume.ebs1.id
instance_id = aws_instance.myos.id
force_detach = true
}

After the file system is created, first mount it on /var/www/html over NFS, and then download the code from the GitHub repo into /var/www/html.

resource "null_resource" "nullremote3"  {depends_on = [
aws_volume_attachment.ebs_att,
]
connection {
type = "ssh"
user = "ec2-user"
private_key = file("C:/Users/sai/Downloads/key2.pem")
host = aws_instance.myos.public_ip
}
provisioner "remote-exec" {
inline = [
"sudo mkfs.ext4 /dev/xvdh",
"sudo mount /dev/xvdh /var/www/html",
"sudo rm -rf /var/www/html/*",
"sudo git clone https://github.com/sanket3122/task1_cloud.git /var/www/html/"
]
}
}
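The mount above lasts only until the instance reboots. As an optional extra, it can be made persistent by appending an entry to /etc/fstab; a minimal sketch (the resource name is illustrative):

resource "null_resource" "persist_mount" {
  depends_on = [null_resource.nullremote3]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sai/Downloads/key2.pem")
    host        = aws_instance.myos.public_ip
  }

  provisioner "remote-exec" {
    # _netdev delays the mount until the network is up at boot time.
    inline = [
      "echo '${aws_efs_file_system.efs1.dns_name}:/ /var/www/html nfs4 defaults,_netdev 0 0' | sudo tee -a /etc/fstab",
    ]
  }
}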

Now create an S3 bucket along with an origin access identity and a bucket policy, and upload the images into the bucket.

resource "aws_s3_bucket" "b" {
bucket = "sanketbendale712345"
acl = "public-read"
tags = {
Name = "sanketbendale712345"
}
}
locals {
s3_origin_id = "myS3Origin"
}
output "b" {
value = aws_s3_bucket.b
}
resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
comment = "This is origin access identity"
}
output "origin_access_identity" {
value = aws_cloudfront_origin_access_identity.origin_access_identity
}
data "aws_iam_policy_document" "s3_policy" {
statement {
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.b.arn}/*"]
principals {
type = "AWS"
identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
}
}
statement {
actions = ["s3:ListBucket"]
resources = ["${aws_s3_bucket.b.arn}"]
principals {
type = "AWS"
identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
}
}
}
resource "aws_s3_bucket_policy" "example" {
bucket = aws_s3_bucket.b.id
policy = data.aws_iam_policy_document.s3_policy.json
}
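The policy above only grants CloudFront read access; the images themselves still have to be placed in the bucket. A minimal sketch of uploading a single image, where the object key and the local source path are illustrative placeholders rather than files taken from the original repo:

resource "aws_s3_bucket_object" "image" {
  # Assumption: one of the repo images has been saved locally at this path.
  bucket = aws_s3_bucket.b.id
  key    = "image1.jpg"
  source = "C:/Users/sai/Downloads/image1.jpg"
  acl    = "public-read"
}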

Create a CloudFront distribution in front of the S3 bucket where the images are uploaded.

resource "aws_cloudfront_distribution" "s3_distribution" {
origin {
domain_name = aws_s3_bucket.b.bucket_regional_domain_name
origin_id = local.s3_origin_id
s3_origin_config {
origin_access_identity = aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path
}
}
enabled = true
is_ipv6_enabled = true
default_cache_behavior {
allowed_methods = ["GET", "HEAD"]
cached_methods = ["GET", "HEAD"]
target_origin_id = local.s3_origin_id
forwarded_values {
query_string = false
cookies {
forward = "none"
}
}
viewer_protocol_policy = "redirect-to-https"
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
}
restrictions {
geo_restriction {
restriction_type = "none"
}
}
viewer_certificate {
cloudfront_default_certificate = true
}
}
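Step 8 of the task also asks for the CloudFront URL to be written into the code under /var/www/html; a minimal sketch of that step, where the page name and image key are illustrative placeholders:

resource "null_resource" "update_code" {
  depends_on = [
    aws_cloudfront_distribution.s3_distribution,
    null_resource.nullremote3,
  ]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = file("C:/Users/sai/Downloads/key2.pem")
    host        = aws_instance.myos.public_ip
  }

  provisioner "remote-exec" {
    # Append an image tag pointing at the CloudFront domain (illustrative page and key).
    inline = [
      "echo '<img src=\"https://${aws_cloudfront_distribution.s3_distribution.domain_name}/image1.jpg\">' | sudo tee -a /var/www/html/index.html",
    ]
  }
}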

Thus our website is successfully deployed on the AWS cloud using Terraform. With a single terraform apply we can set up the entire environment, and with terraform destroy we can tear it all down again. This is the power of Terraform.

OUTPUTS OF THE TASK:

https://github.com/sanket3122/Cloud-Task2.git

Thanks for reading!
