Renaming a folder on a traditional file system is a piece of cake, but what if that file system isn't really a file system at all? Then things get a little trickier! Amazon's S3 service consists of objects identified by key values. There are no real folders or files, but we still need to perform typical file-system actions like renaming folders.

Renaming S3 "folders" isn't possible directly, not even in the S3 management console, but we can work around that. We can create a new "folder" in S3, copy all of the files from the old "folder" into it, and then remove the source "folder" once the copies are done.

To do this, we'll use Python and the boto3 module. If you're working with S3 from Python and not using boto3, you're missing out; it makes S3 much easier to work with.

Prerequisites

To follow along with the demonstration, you'll need to meet a few prerequisites ahead of time:

  • macOS/Linux
  • Python 3+
  • The boto3 module (pip install boto3 to get it)
  • An Amazon S3 Bucket
  • An AWS IAM user access key and secret access key with access to S3
  • An existing "folder" with "files" inside in your S3 bucket

Renaming an Amazon S3 Key

To rename our S3 folder, we'll need to import the boto3 module, and I've chosen to assign some of the values I'll be working with to variables.

import boto3

awsAccessKey = ''
awsSecretAccessKey = ''
s3BucketName = ''
oldFolderKey = ''
newFolderKey = ''

Once I've done that, I'll need to authenticate to S3 by providing my access key ID and secret key for the IAM user I'll be using. In this case, I've chosen to use a boto3 session. I'll be using a boto3 resource to work with S3.

session = boto3.Session(aws_access_key_id=awsAccessKey,
                        aws_secret_access_key=awsSecretAccessKey)
s3 = session.resource('s3')

Next, I need to find all of the objects matching my key prefix. You can see below that I'm using a Python for loop to iterate over the objects in my S3 bucket, using the optional filter action to narrow them down to only those whose keys start with the prefix of the folder I want to rename.

bucket = s3.Bucket(s3BucketName)
for obj in bucket.objects.filter(Prefix=oldFolderKey):

Once the for loop is iterating over the "folder" key and all of the "file" keys inside of it, I need to exclude the "folder" key itself since I won't be copying it; I only need the file keys. I exclude it with an if statement that matches key values that don't end with a forward slash.

Inside that block, which only sees file key values, I assign the file name and destination key to variables to make them easier to reference.

for obj in bucket.objects.filter(Prefix=oldFolderKey):
    srcKey = obj.key
    if not srcKey.endswith('/'):
        fileName = srcKey.split('/')[-1]
        destFileKey = newFolderKey + '/' + fileName

With all of that set up, I can finally do the actual copy using the copy_from action. You can see below that I'm creating an S3 object using the bucket name and destination file key, and then passing the source bucket and key to the copy_from action.

for obj in bucket.objects.filter(Prefix=oldFolderKey):
    srcKey = obj.key
    if not srcKey.endswith('/'):
        fileName = srcKey.split('/')[-1]
        destFileKey = newFolderKey + '/' + fileName
        copySource = s3BucketName + '/' + srcKey
        s3.Object(s3BucketName, destFileKey).copy_from(CopySource=copySource)

Once all of the files have been copied to the new key, I use the delete action to clean up all of the source keys, including the "folder" key itself. Note that the delete call sits at the loop level, outside of the if condition, so the "folder" key gets deleted too.

for obj in bucket.objects.filter(Prefix=oldFolderKey):
    srcKey = obj.key
    if not srcKey.endswith('/'):
        fileName = srcKey.split('/')[-1]
        destFileKey = newFolderKey + '/' + fileName
        copySource = s3BucketName + '/' + srcKey
        s3.Object(s3BucketName, destFileKey).copy_from(CopySource=copySource)
    s3.Object(s3BucketName, srcKey).delete()
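As a side note, boto3's collection API can also delete everything under a prefix in one batched call rather than one delete request per object. Here's a minimal sketch; the helper name delete_prefix is my own, not part of boto3:

```python
def delete_prefix(bucket, prefix):
    """Batch-delete every object whose key starts with prefix.

    bucket is a boto3 Bucket resource; bucket.objects.filter(...)
    returns a collection, and collections support a batched delete().
    """
    return bucket.objects.filter(Prefix=prefix).delete()
```

Behind the scenes, the collection's delete() issues DeleteObjects requests, which is faster than deleting each key individually when a "folder" holds many files.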

Summary

At this point, we're done! You should now see all of the files that were previously in the source key under the destination key with no sign of the source key!
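For reference, the whole rename can be wrapped up into one reusable function. This is just a sketch of the same copy-then-delete approach walked through above; the function and helper names (rename_s3_folder, dest_key_for) are my own, and the s3 argument is the boto3 resource created earlier:

```python
def dest_key_for(src_key, new_prefix):
    """Map a source object key to its destination key under the new prefix."""
    return new_prefix + '/' + src_key.split('/')[-1]

def rename_s3_folder(s3, bucket_name, old_prefix, new_prefix):
    """Copy every file under old_prefix to new_prefix, then delete the sources."""
    bucket = s3.Bucket(bucket_name)
    for obj in bucket.objects.filter(Prefix=old_prefix):
        src_key = obj.key
        if not src_key.endswith('/'):  # skip the "folder" placeholder key
            s3.Object(bucket_name, dest_key_for(src_key, new_prefix)).copy_from(
                CopySource=bucket_name + '/' + src_key)
        # outside the if, so the "folder" key itself is cleaned up too
        s3.Object(bucket_name, src_key).delete()
```

Using the variables from earlier, you'd call it as rename_s3_folder(s3, s3BucketName, oldFolderKey, newFolderKey).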

Join the Jar Tippers on Patreon

It takes a lot of time to write detailed blog posts like this one. In a single-income family, this blog is one way I depend on to keep the lights on. I'd be eternally grateful if you could become a Patreon patron today!
