
Setting Expiration and Lifecycle Rules on Non-AWS S3 Storage

DigitalOcean Spaces, DreamHost’s DreamObjects, and Linode Object Storage all offer cloud storage that’s compatible with Amazon’s S3.

I have a couple of buckets that have been filling up with old logs and backups, and I wanted to set up a rotation script to clear out the oldest ones. On AWS I’d set up lifecycle policies through the web interface. But the web UIs for Spaces and DreamObjects are limited to the basics: creating and deleting buckets and access keys, listing objects, that sort of thing.

I considered writing a shell script using s3tools to list the files, figure out which ones to delete, and then clear them out, but figured if it was built into S3 on Amazon, there had to be an easier way to do it!

The solution should have been obvious: Object storage is mostly used through APIs. Web control panels are kind of extra. I “just” had to look up how to set an expiration policy through the API.

This article at How-To Geek is what pointed me in the right direction. The examples use DigitalOcean and the official AWS command-line client, but it was easy enough to adapt to s3tools now that I knew what approach to take!

Easy: S3Tools

S3tools actually has a simple command for setting an expiration policy!

s3cmd expire --expiry-days=30 --expiry-prefix=mylogprefix s3://mybucket

Alternatively you can set a specific --expiry-date=___ instead of the number of days.

You can confirm the policy is set by running:

s3cmd getlifecycle s3://mybucket

The downside is that the expire command will only set one expiration policy per bucket. Run it again with different parameters and it replaces the old one. If you want to keep one set of files for 30 days and another set of files for 10 days, you’ll have to write a policy file yourself (see below!) and upload it with:

s3cmd setlifecycle FILE s3://mybucket

Complicated: Lifecycle Policy Files

Back to that How-To Geek article: it has an example of the JSON-formatted policy file:

{
    "Rules": [
        {
            "ID": "Prune old files",
            "Status": "Enabled",
            "Prefix": "",
            "Expiration": {
                "Days": 30
            }
        }
    ]
}

Notice that “Rules” is an array, so you can add more items to it to set different policies for different prefixes. I haven’t tried it yet, but it ought to work with s3tools too.
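For instance, to keep logs for 30 days but backups for only 10, the file might look something like this (the prefixes here are made up; substitute your own):

```json
{
    "Rules": [
        {
            "ID": "Prune old logs",
            "Status": "Enabled",
            "Prefix": "logs/",
            "Expiration": {
                "Days": 30
            }
        },
        {
            "ID": "Prune old backups",
            "Status": "Enabled",
            "Prefix": "backups/",
            "Expiration": {
                "Days": 10
            }
        }
    ]
}
```

Each rule only applies to objects whose keys start with its Prefix, so the two retention periods don’t step on each other.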

And the AWS CLI

For completeness, here are their examples for uploading and checking the file using the AWS CLI:

aws s3api put-bucket-lifecycle-configuration \
    --bucket my-bucket \
    --endpoint-url https://nyc3.digitaloceanspaces.com \
    --lifecycle-configuration file://my-policy.json
aws s3api get-bucket-lifecycle-configuration \
    --bucket my-bucket \
    --endpoint-url https://nyc3.digitaloceanspaces.com
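If you’d rather not hand-edit the JSON, here’s a quick Python sketch (my own, not from the article; the prefixes, day counts, and filename are just examples) that generates a multi-rule policy file you can then upload with s3cmd setlifecycle:

```python
import json

# Hypothetical retention schedule: key prefix -> days to keep
RETENTION = {
    "logs/": 30,
    "backups/": 10,
}

def build_rules(retention):
    """Build the Rules array for an S3 lifecycle policy."""
    return [
        {
            "ID": f"Prune {prefix} after {days} days",
            "Status": "Enabled",
            "Prefix": prefix,
            "Expiration": {"Days": days},
        }
        for prefix, days in retention.items()
    ]

policy = {"Rules": build_rules(RETENTION)}

# Write the file to upload with: s3cmd setlifecycle my-policy.json s3://mybucket
with open("my-policy.json", "w") as f:
    json.dump(policy, f, indent=4)
```

Then s3cmd setlifecycle my-policy.json s3://mybucket applies it, and s3cmd getlifecycle s3://mybucket should show both rules.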