S3 hosting

2020.06.26

I chose the following setup for its simplicity and cost (the only paid part is S3, and even that is quite minimal).

The setup consists of:

hugo

I wanted a simple framework that builds a website/blog from markdown files. Hugo seems to be a very good option.

build static pages

I use a makefile build target to generate the static pages:

HUGO_PACKAGE ?= github.com/gohugoio/[email protected]

build:
	docker run --rm -v "${PWD}":/usr/src/myapp -w /usr/src/myapp -e GO111MODULE=on golang:1.14-alpine sh -c "go get ${HUGO_PACKAGE} && hugo"

This way the user does not even need to have hugo installed. Simply add or edit markdown pages under the content directory and run make build to generate the public directory with static pages.
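Because HUGO_PACKAGE is assigned with make's `?=` (conditional assignment), the hugo version can be overridden per invocation or via the environment without editing the makefile. A minimal sketch of that behavior, using a throwaway makefile with dummy values:

```shell
# throwaway makefile demonstrating ?= override semantics
cat > /tmp/demo.mk <<'EOF'
PKG ?= github.com/gohugoio/[email protected]

show:
	@echo $(PKG)
EOF

make -s -f /tmp/demo.mk show              # prints the default version
make -s -f /tmp/demo.mk show PKG=v0.99.0  # command-line value wins
PKG=v0.88.0 make -s -f /tmp/demo.mk show  # environment also overrides ?=
```

So `make build HUGO_PACKAGE=github.com/gohugoio/hugo@<other-version>` would build with a different hugo release.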

test site locally

Changes can be tested locally with the make local-run target:

HUGO_PACKAGE ?= github.com/gohugoio/[email protected]

local-run:
	docker build --build-arg hugo_package=${HUGO_PACKAGE} -t reisinger/reisinger.co.uk:dev .
	docker run --rm --name reisinger.co.uk -p 8080:80 reisinger/reisinger.co.uk:dev

plus the corresponding Dockerfile:

FROM golang:1.14-alpine AS build

WORKDIR /root
ARG hugo_package
RUN GO111MODULE=on go get $hugo_package
COPY . .
RUN sed -i 's/baseURL.*/baseURL = "http:\/\/localhost:8080\/"/' config.toml
RUN hugo -D

FROM nginx:1.19
COPY --from=build /root/public/ /usr/share/nginx/html
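The sed line in the Dockerfile rewrites baseURL so that generated links resolve against the local container instead of the production domain. Its effect can be checked on a throwaway config (sample values made up):

```shell
# sample config.toml with a production baseURL
cat > /tmp/config.toml <<'EOF'
baseURL = "https://example.com/"
title = "my site"
EOF

# the same substitution the Dockerfile runs
sed -i 's/baseURL.*/baseURL = "http:\/\/localhost:8080\/"/' /tmp/config.toml

grep baseURL /tmp/config.toml  # baseURL = "http://localhost:8080/"
```

After make local-run the site is served by nginx on http://localhost:8080.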

s3

Create an s3 bucket with “Block all public access” turned off.

Configure the s3 bucket for static web hosting. Select the bucket and:

  • configure website hosting: properties -> static website hosting -> use this bucket to host a website. For the index document enter index.html and for the error document enter 404.html.
  • enable public access (skip if you already did this when creating the bucket): permissions -> block public access -> edit -> "un-tick" Block all public access
  • add a bucket policy for cloudflare IP addresses: permissions -> bucket policy (replace <bucket-name> in the Resource section):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CloudflareReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::<bucket-name>/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "2400:cb00::/32",
                        "2606:4700::/32",
                        "2803:f800::/32",
                        "2405:b500::/32",
                        "2405:8100::/32",
                        "2a06:98c0::/29",
                        "2c0f:f248::/32",
                        "173.245.48.0/20",
                        "103.21.244.0/22",
                        "103.22.200.0/22",
                        "103.31.4.0/22",
                        "141.101.64.0/18",
                        "108.162.192.0/18",
                        "190.93.240.0/20",
                        "188.114.96.0/20",
                        "197.234.240.0/22",
                        "198.41.128.0/17",
                        "162.158.0.0/15",
                        "104.16.0.0/12",
                        "172.64.0.0/13",
                        "131.0.72.0/22"
                    ]
                }
            }
        }
    ]
}

The list of source IPs is taken from the cloudflare IP ranges page; check it for the current ranges, as they can change over time.
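The same console steps can also be scripted with the AWS CLI. A sketch, assuming an already-configured CLI; the aws commands are shown commented out because <bucket-name> has to be filled in first, while the website configuration file can be generated and validated locally:

```shell
# website hosting configuration matching the console settings above
cat > /tmp/website.json <<'EOF'
{
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "404.html"}
}
EOF
python3 -m json.tool /tmp/website.json > /dev/null && echo "website.json ok"

# with the cloudflare policy above saved as /tmp/policy.json:
# aws s3api put-public-access-block --bucket <bucket-name> \
#     --public-access-block-configuration BlockPublicPolicy=false,RestrictPublicBuckets=false
# aws s3api put-bucket-website --bucket <bucket-name> --website-configuration file:///tmp/website.json
# aws s3api put-bucket-policy --bucket <bucket-name> --policy file:///tmp/policy.json
```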

travis

Travis runs make build and then pushes the content of the generated public directory to the s3 bucket. The content of the .travis.yml file (replace <bucket-name> and <region>):

language: minimal

git:
  depth: false
  quiet: true
  submodules: true

branches:
  only:
    - master

script:
  - "make build"

deploy:
  provider: s3
  access_key_id: $AWS_ACCESS_KEY
  secret_access_key: $AWS_SECRET_KEY
  bucket: <bucket-name>
  region: <region>
  skip_cleanup: true
  local_dir: ./public
  verbose: true

git.submodules is set to true so that the hugo theme can be cloned as well. Travis needs to be configured with the environment variables AWS_ACCESS_KEY and AWS_SECRET_KEY (travis repository -> more options -> settings).

Do NOT use your root aws account credentials; instead create a new AWS IAM user with “Programmatic access” and attach a policy that only grants access to your bucket (replace <bucket-name>):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket-name>",
                "arn:aws:s3:::<bucket-name>/*"
            ]
        }
    ]
}
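The IAM user and policy can also be created from the CLI. A sketch: the policy document above is written out with the bucket name substituted (my-site-bucket and site-deployer are made-up example names), and the aws iam commands are shown commented out since they require real credentials:

```shell
BUCKET=my-site-bucket   # example value, use your real bucket name

# write the deploy policy shown above, substituting the bucket name
cat > /tmp/deploy-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:AbortMultipartUpload",
                "s3:DeleteObject",
                "s3:PutObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::${BUCKET}",
                "arn:aws:s3:::${BUCKET}/*"
            ]
        }
    ]
}
EOF
python3 -m json.tool /tmp/deploy-policy.json > /dev/null && echo "policy ok"

# aws iam create-user --user-name site-deployer
# aws iam put-user-policy --user-name site-deployer --policy-name deploy-s3 \
#     --policy-document file:///tmp/deploy-policy.json
# aws iam create-access-key --user-name site-deployer
```

The access key returned by the last command is what goes into the AWS_ACCESS_KEY and AWS_SECRET_KEY travis variables.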

cloudflare

Create a cloudflare account and add your site.

Configure DNS and Page rules in cloudflare:

  • select site
  • under DNS tab, add two records:
    • type: A, Name: www, Content: 192.0.2.1 (dummy record, this will be handled by page rule)
    • type: CNAME, Name: <site-name>, Content: <bucket-name>.s3-website.<region>.amazonaws.com (replace <site-name>, <bucket-name> and <region>)
  • under SSL/TLS tab select the flexible encryption mode (s3 website endpoints do not serve https, so cloudflare has to connect to the origin over plain http)
  • under Page Rules create page rule for pattern www.<site>/* (replace <site>) with settings:
    • Forwarding URL -> 301 - Permanent Redirect
    • https://<site>/$1 (replace <site>)

It can take a bit of time for all the DNS changes to propagate.