How To Blacklist or Block an IP Address in AWS API Gateway

Blocking or blacklisting an IP address is a useful way to thwart bad actors misusing your API.

It turns out there’s a really easy way to block IP addresses in API Gateway for your REST API.

To get started, head over to the Resources tab of your REST API in the API Gateway console.

Then navigate to the Resource Policy section to add the policy.

To blacklist or block an IP address, enter the following IAM policy statement. Make sure to replace the IP address with the one you want to block.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "*"
        },
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "IP_ADDRESS_TO_BLOCK_HERE"
                    ]
                }
            }
        }
    ]
}

This policy allows all users to invoke the API, but explicitly denies access from any IP addresses that you specify. Note that aws:SourceIp takes a list, so you can block one or more IP addresses (CIDR ranges also work).

Note: to make the change live, you need to redeploy your API to your stage. This can be done from the Resources page by choosing Deploy API.
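
If you manage the API with Terraform instead of the console (as the next section does for S3), the same change can be made from code. The sketch below is just illustrative: it assumes the REST API is already defined as aws_api_gateway_rest_api.example and that the policy JSON above has been saved alongside the configuration as resource_policy.json.

# Attach the resource policy shown above.
resource "aws_api_gateway_rest_api_policy" "ip_block" {
  rest_api_id = aws_api_gateway_rest_api.example.id
  policy      = file("${path.module}/resource_policy.json")
}

# A new deployment is what makes the policy change live on the stage.
resource "aws_api_gateway_deployment" "redeploy" {
  rest_api_id = aws_api_gateway_rest_api.example.id

  triggers = {
    # Force a redeployment whenever the policy file changes.
    policy_hash = filesha1("${path.module}/resource_policy.json")
  }

  lifecycle {
    create_before_destroy = true
  }
}

The deployment would then be pointed at your stage (for example via an aws_api_gateway_stage resource), which is the code equivalent of clicking Deploy API.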

After a few minutes (it took five or so to take effect for me), requests from the blocked address will be denied. An error message similar to the following will be returned to the caller:

{
  "Message": "User: anonymous is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:us-east-1:********5794:sg1v5qzfuk/test/GET/dogs with an explicit deny"
}

If you want to test this out, look up your own public IP address, add it to the deny list, and experiment with this feature.

How to Upload Multiple Files to AWS S3 using Terraform

Problem

I want to upload multiple files from a specific folder to an AWS S3 bucket.

Assumptions

The S3 bucket name is test.

The directory structure is as follows.

documents
|- file_1
|- subdirectory1
|  |- file_1_1
|  |- file_1_2
|  |- subdirectory2
|  |  |- file_1_2_1

We want to end up with the following S3 objects.

  • s3://test/file_1
  • s3://test/subdirectory1/file_1_1
  • s3://test/subdirectory1/file_1_2
  • s3://test/subdirectory1/subdirectory2/file_1_2_1

Solution

resource "aws_s3_bucket_object" "test" {
  for_each = fileset("./documents/", "**")
  bucket = "test"
  key = each.value
  source = "./documents/${each.value}"
  etag = filemd5("./documents/${each.value}")
}

  • Line 1: Create an S3 bucket object resource.
  • Line 2: Use a for_each argument to iterate over the files returned by the fileset function. for_each identifies each resource instance by the file’s relative path, which makes it easy to add or remove files later. The fileset function returns the set of file names that match a pattern under a given path; ** makes the search recursive (see the sketch after this list).
  • Line 3: The name of the bucket to put the files in.
  • Line 4: The object’s key, i.e. its name once it’s in the bucket. In the example above, it is the same as the file’s relative path.
  • Line 5: The path to the file to be uploaded.
  • Line 6: Triggers an update only if the file changes. The ETag of each object is an MD5 hash of that object.
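
If you want to see exactly what for_each iterates over, you can inspect fileset directly, for example with a throwaway output block (the output name here is arbitrary). With the directory tree from the assumptions above, it returns the files’ relative paths, which become both the instance keys and the S3 object keys.

# Temporary output to inspect what fileset() finds under ./documents/.
output "document_files" {
  value = fileset("./documents/", "**")
}

# With the example tree, terraform apply prints something like:
# document_files = toset([
#   "file_1",
#   "subdirectory1/file_1_1",
#   "subdirectory1/file_1_2",
#   "subdirectory1/subdirectory2/file_1_2_1",
# ])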
