    How to Use AWS S3 to host your static websites

Cloud technologies are fast becoming the mainstay of how we develop custom solutions for consumers. Amazon Web Services (AWS) has become the dominant player and the go-to cloud service for developers. I particularly like the variety of services, tools, and metrics at my disposal when using AWS compared to other cloud options.

    In this article, I’ll give you a brief overview of one of my favourite AWS Services – AWS S3. I’ll also show you how you can use an AWS S3 bucket to host your static websites.

    Overview of AWS S3

    Amazon S3 is one of the building blocks of AWS. It is advertised as an “infinitely scaling storage”. Amazon S3 allows people to store objects (‘files’) in “buckets” (directories).

    Buckets

Buckets are defined at the region level (see AWS Regions). Buckets also follow a standard naming convention: names must contain no uppercase letters and no underscores, must be between 3 and 63 characters long, must not be formatted like an IP address, and must start with a lowercase letter or a number. One thing to note is that bucket names must be globally unique.
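To make those rules a little more concrete, here is a small Python sketch (not an official validator, and deliberately simplified) that rejects the most common naming mistakes:

import re

# Rough check of the naming rules described above: 3-63 characters, lowercase
# letters, numbers, dots and hyphens only, starting and ending with a letter
# or number, and not formatted like an IP address.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")
IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(BUCKET_NAME_RE.match(name)) and not IP_RE.match(name)

print(is_valid_bucket_name("my-example-bucket"))  # True
print(is_valid_bucket_name("My_Bucket"))          # False (uppercase/underscore)
print(is_valid_bucket_name("192.168.0.1"))        # False (looks like an IP)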

    Object

Objects have a key, and the key is the full path, e.g. s3://example-bucket/myfile.txt. The key is composed of the prefix plus the object name. The Amazon S3 UI might trick you into believing S3 buckets are made up of ‘directories’, but that is not the case. Object values are the contents of the body and have a maximum size of 5 terabytes (5,000 gigabytes). When uploading a file larger than 5 GB, you must use multipart upload. Multipart upload is also advised for files larger than 100 MB, even though it isn’t compulsory.
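If you want to see what that looks like in code, here is a minimal sketch using the boto3 SDK (the bucket and file names below are placeholders, not part of this tutorial). boto3’s transfer manager switches to multipart upload automatically once a file crosses the configured threshold:

import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Files above multipart_threshold are uploaded in parallel chunks automatically.
# The 100 MB threshold mirrors the recommendation above.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=50 * 1024 * 1024)

s3.upload_file("backup.tar.gz", "example-bucket", "backups/backup.tar.gz",
               Config=config)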

    Versioning

You can version your files on S3; versioning is enabled at the bucket level. Overwriting the same key creates a new version of the object. It’s best practice to version your bucket to protect against unintended deletes and to allow rolling back to older versions. It is important to note that any file that existed in the bucket before versioning was enabled will have the version “null”.
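For reference, versioning can also be switched on programmatically. The boto3 sketch below uses a placeholder bucket name, enables versioning, and then lists the versions of a single key, including any “null” versions from before versioning was enabled:

import boto3

s3 = boto3.client("s3")

# Turn versioning on for the bucket ("example-bucket" is a placeholder).
s3.put_bucket_versioning(
    Bucket="example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Inspect the versions stored for one key.
versions = s3.list_object_versions(Bucket="example-bucket", Prefix="myfile.txt")
for v in versions.get("Versions", []):
    print(v["Key"], v["VersionId"], v["IsLatest"])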

    Encryption

    There are four methods of encrypting objects in S3:

• SSE-S3: encrypts S3 objects using keys handled and managed by AWS. The object is encrypted server-side using AES-256. The header must be set to “x-amz-server-side-encryption”: “AES256” (see the sketch after this list)
• SSE-KMS: leverages AWS Key Management Service (KMS) to manage encryption keys. The object is also encrypted server-side, with the advantage of additional user control and an audit trail for debugging purposes. The header must be set to “x-amz-server-side-encryption”: “aws:kms”
• SSE-C: for when you want to manage your own encryption keys. S3 does not store the encryption key you provide. HTTPS must be used with this encryption type, because the data key is passed in the header and must therefore be protected with TLS. An encryption key must be provided in the HTTP headers of every request
• Client-Side Encryption: clients encrypt data themselves before sending it to S3, and are also responsible for decrypting it after retrieving it. The customer fully manages the keys and the encryption cycle
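To make the headers above a little more concrete, here is a small boto3 sketch (bucket, file, and KMS key names are placeholders) that uploads the same object once with SSE-S3 and once with SSE-KMS; boto3 sets the x-amz-server-side-encryption header for you from these parameters:

import boto3
from pathlib import Path

s3 = boto3.client("s3")
data = Path("report.pdf").read_bytes()

# SSE-S3: S3 manages the keys; the request carries the AES256 header.
s3.put_object(Bucket="example-bucket", Key="report.pdf",
              Body=data, ServerSideEncryption="AES256")

# SSE-KMS: encryption is delegated to a KMS key (the key alias is a placeholder).
s3.put_object(Bucket="example-bucket", Key="report.pdf",
              Body=data, ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-kms-key")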

    Security

S3 supports IAM policies, which determine which API calls are allowed for a specific user and are managed from the IAM console. S3 bucket policies define bucket-wide rules and also allow cross-account access.

S3 has access logs, which can be stored in an S3 bucket. API calls to S3 can be logged in AWS CloudTrail. S3 also supports multi-factor authentication delete (MFA Delete), which can be required in versioned buckets before objects are deleted. Pre-signed URLs can also be generated to allow authenticated users to access objects in the bucket for a limited amount of time.
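As an example of that last point, a pre-signed URL can be generated with a few lines of boto3 (the bucket and key names below are placeholders):

import boto3

s3 = boto3.client("s3")

# Generate a URL that grants read access to one object for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-bucket", "Key": "private/report.pdf"},
    ExpiresIn=3600,
)
print(url)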

    Now, let’s run through how to deploy our static website to AWS S3.

    Deploying our Static Website to AWS S3

For the purpose of this walkthrough, we will be using a pre-built sample of a static website, which you can download from GitHub – Aeeiee-Team/Restaurant-static-website.

    Step 1- Create or Sign in to AWS Account

You will need an AWS account; if you don’t have one, you can create a free AWS account from Amazon. Sign in with your AWS username and password to open the AWS Management Console. Then click on All services and choose S3.

    Step 2- Create a Bucket

Click on the “Create bucket” button highlighted in orange to create a new bucket. Make sure the name of your bucket is globally unique.

Before you continue, make sure to enable versioning to protect against unintentional deletes and give you the ability to roll back.

Now you can go ahead and create the bucket by clicking the “Create bucket” button.
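If you prefer to script this step rather than use the console, a rough boto3 equivalent looks like the sketch below (the bucket name and region are placeholders; in us-east-1 the location constraint must be omitted):

import boto3

s3 = boto3.client("s3")
bucket_name = "my-globally-unique-bucket-name"  # placeholder, must be unique

# Create the bucket in a chosen region.
s3.create_bucket(
    Bucket=bucket_name,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Enable versioning, matching the console step above.
s3.put_bucket_versioning(
    Bucket=bucket_name,
    VersioningConfiguration={"Status": "Enabled"},
)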

    Step 3 – Upload our static files to the Bucket

After creating the bucket, you will see it in the list of buckets.

Let’s go ahead and click on the bucket we want to host our website on. Once you do this, you should be taken to a page where you can upload the static files to the bucket. Click Upload.
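If you would rather upload from code than the console, the following boto3 sketch walks a local copy of the sample site and uploads each file with a sensible Content-Type (the folder and bucket names are placeholders):

import mimetypes
import os
import boto3

s3 = boto3.client("s3")
bucket_name = "my-globally-unique-bucket-name"  # placeholder
site_dir = "Restaurant-static-website"          # local folder holding the sample site

# Upload every file, keeping the relative path as the object key and setting
# a Content-Type so browsers render the pages correctly.
for root, _, files in os.walk(site_dir):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, site_dir).replace(os.sep, "/")
        content_type = mimetypes.guess_type(path)[0] or "binary/octet-stream"
        s3.upload_file(path, bucket_name, key,
                       ExtraArgs={"ContentType": content_type})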

    Step 4 – Edit Access Control List (ACL) and Grant Public Access

    After uploading the files/folders, you have to edit the Access Control List (ACL) and select the grant public access radio button.

    Step 5: Enable Static Website Hosting

    When your files are successfully uploaded, navigate to the properties tab of the bucket and scroll down to the bottom.

Click on the edit button of the static website hosting property and select the ‘Enable’ radio button, then specify the home or default page of the website, which in this case will be “index.html”. You can also specify a page to display when an error is encountered in the error document field. When you are done, save your changes.
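The same configuration can be applied programmatically. Here is a boto3 sketch with a placeholder bucket name; “error.html” is an assumed error page, so use whichever file your site actually ships with:

import boto3

s3 = boto3.client("s3")

# Equivalent to enabling static website hosting in the properties tab.
s3.put_bucket_website(
    Bucket="my-globally-unique-bucket-name",  # placeholder
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},  # assumed error page
    },
)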

Step 6: Dealing with the 403 “Forbidden” Error

    After saving the changes, you should see a link through which the contents of the bucket will be accessible.

Now, navigate to the link in a browser. It should return a 403 “Forbidden” error page, and that’s because the bucket policy has not been changed. The next step is to turn off the default block on public access for the bucket and create our own bucket policy that grants read access to the public.

    Step 7: Creating your own bucket policy

    Let’s go back to the console, check our S3 bucket, and navigate to the permissions tab. You’ll see the Block public access (bucket settings). Go ahead and click on the edit button.

Then uncheck the block public access checkbox and save your changes.

You’ll get a warning after clicking “Save changes”. This is normal, so just follow the instructions in the modal to confirm the changes.

Below the “Block public access (bucket settings)” section, scroll down to “Bucket policy” and click on Edit.

    After clicking on edit, you will get re-routed to another page where you can specify your bucket policy in JSON format.

You can either paste your pre-generated JSON code or use the policy generator to generate your bucket policy. For the purpose of this tutorial, I’m going to use the policy generator to create a bucket policy that allows GetObject requests on every object in the bucket. I’ll click on the policy generator and be rerouted to another page where I can generate the policy based on my requirements.

I am going to choose S3 as the service I want to generate a policy for, then choose “Allow” as the effect. I’ll then choose “*” as the principal, which means all. Next, I’ll specify the actions I want to allow, which in this case is the “GetObject” action.

You also need to provide the Amazon Resource Name (ARN), which can be found at the top of the bucket policy editor. For this tutorial, I’m also going to append “/*” to the ARN, which means anyone can perform a GetObject on any object in this bucket.

I’ll then click on “Add Statement” and “Generate Policy”. This should open a modal with a JSON policy already generated.

    {
      "Id": "Policy1632752556611",
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "Stmt1632752535612",
          "Action": [
            "s3:GetObject"
          ],
          "Effect": "Allow",
          "Resource": "arn:aws:s3:::statics-website-example/*",
          "Principal": "*"
        }
      ]
    }

    I’ll then copy this policy and paste it into my bucket policy and save the changes.
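For completeness, steps 6 and 7 can also be done in code. The boto3 sketch below disables the public access block and attaches an equivalent read-only policy; the bucket name matches the one used in the generated policy above:

import json
import boto3

s3 = boto3.client("s3")
bucket_name = "statics-website-example"  # the bucket used in the policy above

# Turn off the "Block public access" settings, mirroring step 7 in the console.
s3.put_public_access_block(
    Bucket=bucket_name,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)

# Attach a read-only policy equivalent to the generated one.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": f"arn:aws:s3:::{bucket_name}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))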

    After saving the changes, I’ll go back to our link to access the website and refresh. The website should now be accessible.

In conclusion, you can see that it’s easy and straightforward to host your static websites on AWS S3. This is a simple example, and there are more complex and exciting things that can be done using S3 as a static host. Hopefully this tutorial has helped you lay the foundation on which you can go on to build more exciting things.
