SFTP Setup with AWS Transfer Family

This is a pretty straightforward blog post and personal documentation on how to set up an SFTP server with an S3 bucket as its backend. I hope you find this as helpful as I did.

Step 1.

Create an S3 bucket from the AWS console. The S3 bucket is where the files will live. No custom changes need to be made to the bucket itself.
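If you prefer scripting this console step, here is a minimal sketch of the parameters boto3's `create_bucket` call expects (the bucket name and region are placeholders, not values from this walkthrough):

```python
# Build the parameters for boto3's S3 create_bucket call.
# Bucket name and region below are hypothetical examples.
def make_create_bucket_params(bucket: str, region: str) -> dict:
    params = {"Bucket": bucket}
    # us-east-1 is the S3 default location and must NOT be passed as a
    # LocationConstraint -- a long-standing quirk of the S3 API.
    if region != "us-east-1":
        params["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return params

print(make_create_bucket_params("my-sftp-bucket", "us-west-2"))
# With credentials configured, you would then run:
#   boto3.client("s3", region_name=region).create_bucket(**params)
```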

Step 2.


Navigate to the AWS Transfer Family service and select Create server. You will be prompted to select FTP, FTPS, or SFTP. Select SFTP, which stands for SSH File Transfer Protocol. SFTP is a network protocol that provides file access, file transfer, and file management over any reliable data stream.


Step 3.

Next we need to select an identity provider, which in short is an access layer that controls which users can access the SFTP server. You can provide your own custom identity provider programmatically via Lambda, or, as in this case, use the AWS-managed service and create users one at a time.


Step 4.

Next, we can set up our own custom DNS hostname. In this case we are just going to use the AWS-generated hostname that is provided. If you would like, this can be changed in the future.

Options:

  • None: AWS will create a DNS record such as {some-random-generated-string}.server.transfer.{region}.amazonaws.com.
  • Route 53 Alias: the service will create an alias in Route 53 so you can use a friendlier name and a subdomain such as ftps.martinpatino.com.
  • Other DNS: a third-party DNS server option.


Step 5.

Next, select a domain. There are two options, and here we will go with Amazon S3, since we are giving a user limited access to a specific S3 bucket.

Once this step is completed, you will be prompted to review your server summary and confirm, after which your SFTP server will be created.
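The console choices from Steps 2 through 5 map directly onto parameters of the Transfer Family CreateServer API. As a rough sketch (shown as a plain dict rather than a live call, since it needs AWS credentials):

```python
# How the console selections map to Transfer Family's CreateServer API.
create_server_params = {
    "Protocols": ["SFTP"],                      # Step 2: protocol selection
    "IdentityProviderType": "SERVICE_MANAGED",  # Step 3: AWS-managed users
    "EndpointType": "PUBLIC",                   # Step 4: AWS-generated hostname ("None")
    "Domain": "S3",                             # Step 5: S3 as the storage backend
}
print(create_server_params)
# With boto3: boto3.client("transfer").create_server(**create_server_params)
```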

Step 6.

Configuration of IAM Roles with S3

Now that we have created the SFTP server and the S3 bucket we want the user to access, the next part is user role permissions and policy creation. In our case we want to restrict users to a specific bucket. Head to IAM and create a custom SFTP role for your user under the Transfer service use case.

You can copy and paste this policy and modify the {custom-bucket-name} field.

This role will give the user access to list, fetch, add, update, and delete files in your S3 bucket via SFTP.
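If you'd rather not hand-edit the placeholder, a small helper (the function name and bucket name here are illustrative) can render the policy below for a concrete bucket:

```python
import json

# Render the SFTP role policy for a given bucket name, instead of
# hand-editing the {custom-bucket-name} placeholders.
def sftp_role_policy(bucket: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadWriteS3",
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
                # Bucket-level actions target the bucket ARN itself.
                "Resource": [f"arn:aws:s3:::{bucket}"],
            },
            {
                "Sid": "",
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:DeleteObjectVersion",
                    "s3:GetObjectVersion",
                    "s3:GetObjectACL",
                    "s3:PutObjectACL",
                ],
                # Note the trailing /* : object-level actions apply to
                # keys inside the bucket, not to the bucket ARN.
                "Resource": [f"arn:aws:s3:::{bucket}/*"],
            },
        ],
    }
    return json.dumps(policy, indent=4)

print(sftp_role_policy("my-sftp-bucket"))
```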

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::{custom-bucket-name}"
            ],
            "Effect": "Allow",
            "Sid": "ReadWriteS3"
        },
        {
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:DeleteObjectVersion",
                "s3:GetObjectVersion",
                "s3:GetObjectACL",
                "s3:PutObjectACL"
            ],
            "Resource": [
                "arn:aws:s3:::{custom-bucket-name}/*"
            ],
            "Effect": "Allow",
            "Sid": ""
        }
    ]
}

Step 7.

To create a user you will need a username, the S3 bucket you want the user to have access to, and the role they are associated with, which is the one we created above. Bind your new SFTP_COMPANY_ROLE (or whatever you called it) to the user. In my example below I am calling the user usercompany_a, but you can call it whatever you want. You will then have the option to set up a policy, which is not required; however, it helps lock your user down to a specific directory under a certain bucket.
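For reference, the user-creation form maps onto the Transfer Family CreateUser API. A sketch of the parameters (the account ID, server ID, and home directory below are placeholders, not real values):

```python
# How the console's user form maps to Transfer Family's CreateUser API.
# All IDs and names below are hypothetical placeholders.
create_user_params = {
    "ServerId": "s-1234567890abcdef0",
    "UserName": "usercompany_a",
    "Role": "arn:aws:iam::123456789012:role/SFTP_COMPANY_ROLE",
    # Drops the user into a folder inside the bucket on login:
    "HomeDirectory": "/my-sftp-bucket/usercompany_a",
}
print(create_user_params)
# With boto3: boto3.client("transfer").create_user(**create_user_params)
```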


AWS Transfer does have an auto-generated policy if you would like to use it, which looks like the example below.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowListingOfUserFolder",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::${transfer:HomeBucket}"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "${transfer:HomeFolder}/*",
            "${transfer:HomeFolder}"
          ]
        }
      }
    },
    {
      "Sid": "HomeDirObjectAccess",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:GetObjectVersion"
      ],
      "Resource": "arn:aws:s3:::${transfer:HomeDirectory}*"
    }
  ]
}
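The `${transfer:...}` tokens in that policy are variables the service expands per user at request time, which is what lets one policy serve many users. A rough sketch of the substitution (the service does this for you; the bucket and folder names are made up):

```python
# Illustrate how Transfer Family's policy variables are filled in per user.
# HomeBucket = the user's bucket name, HomeFolder = their prefix inside it.
def expand(policy_text: str, bucket: str, folder: str) -> str:
    return (policy_text
            .replace("${transfer:HomeBucket}", bucket)
            .replace("${transfer:HomeFolder}", folder))

print(expand("arn:aws:s3:::${transfer:HomeBucket}", "my-sftp-bucket", "usercompany_a"))
print(expand("${transfer:HomeFolder}/*", "my-sftp-bucket", "usercompany_a"))
```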

The final step is to add an SSH public key, which the user should provide to you. The user will then use their own private key to connect to the SFTP server via an SFTP client. This field is required; without it, your users will not be able to connect.
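If the user needs help producing a key pair, OpenSSH's `ssh-keygen` does the job. A sketch driven from Python (it assumes `ssh-keygen` is installed; the filename and comment are hypothetical):

```python
import os
import subprocess
import tempfile

# Generate an ed25519 key pair with OpenSSH's ssh-keygen
# (assumes ssh-keygen is on the PATH; names are placeholders).
key_path = os.path.join(tempfile.mkdtemp(), "sftp_user_key")
subprocess.run(
    ["ssh-keygen", "-t", "ed25519", "-N", "", "-C", "usercompany_a", "-f", key_path],
    check=True,
)

# The .pub file is what gets pasted into the Transfer Family user;
# the private key stays with the user.
with open(key_path + ".pub") as f:
    print(f.read().strip())
```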

Test time

It’s time to test the SFTP connection. I use Cyberduck, which is available on both Mac and Windows. Try to connect, and if everything is set up correctly you should see the contents of your S3 bucket. You should even be able to upload files from your SFTP client.


Success!!!

Hope you enjoyed learning a bit!
