Using AWS S3 as a simple cache service
S3 is great for file storage, but it can do much more than that. I love using S3 as a simple caching mechanism for stateless functions that need to persist some ephemeral data between invocations.
Traditionally, you would use an in-memory caching tool like Redis for this, and Redis still has its place: it will be faster than retrieving data from S3 in almost every case. However, if millisecond performance is not a concern, S3 is a cheap, low-effort, simple-to-implement alternative.
Some basic caching helpers
// cache.js
const { S3 } = require('aws-sdk');

const s3 = new S3({ region: 'eu-west-1' });

const { CACHE_BUCKET } = process.env;

// Read a cached value; fall back to the default if the object doesn't exist
const get = async (key, defaultValue = null) => {
  try {
    const { Body } = await s3
      .getObject({
        Bucket: CACHE_BUCKET,
        Key: `${key}.json`
      })
      .promise();
    return JSON.parse(Body.toString());
  } catch (e) {
    // File might not exist yet
    return defaultValue;
  }
};

// Write a value to the cache as a JSON object
const set = (key, value) =>
  s3
    .putObject({
      Bucket: CACHE_BUCKET,
      Key: `${key}.json`,
      Body: JSON.stringify(value)
    })
    .promise();

module.exports = {
  get,
  set
};
You would use them like this (be sure to define the CACHE_BUCKET
environment variable):
const cache = require('./cache.js');

// ...

await cache.set('my-key', { message: 'Hello, world!', boolValue: true });

const value = await cache.get('my-key');
console.log(value); // { message: 'Hello, world!', boolValue: true }
Storage types
In the example code above, we're storing values as JSON objects. Of course, you can store any other type of content (binary, plain text) that fits your use case.
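For instance, a plain-text variant of the helpers might look something like the sketch below. It reuses the s3 client and CACHE_BUCKET from cache.js; the getText/setText names and the ContentType value are just illustrative choices, not part of the code above.
// Hypothetical plain-text variants of the cache helpers (sketch only)
const setText = (key, text) =>
  s3
    .putObject({
      Bucket: CACHE_BUCKET,
      Key: `${key}.txt`,
      Body: text, // a string here; a Buffer works the same way for binary content
      ContentType: 'text/plain' // optional, but useful when inspecting objects in the console
    })
    .promise();

const getText = async (key, defaultValue = null) => {
  try {
    const { Body } = await s3
      .getObject({ Bucket: CACHE_BUCKET, Key: `${key}.txt` })
      .promise();
    return Body.toString('utf-8');
  } catch (e) {
    // Object might not exist yet
    return defaultValue;
  }
};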
S3 Lifecycle Rules
Using S3 as a cache works especially well in combination with S3 Lifecycle configurations, which let you automatically delete cached objects after a certain period of time has passed.
An example of how to set that up in CloudFormation:
Resources:
  S3Bucket:
    Type: 'AWS::S3::Bucket'
    Properties:
      BucketName: 'my-caching-bucket' # Set bucket name
      AccessControl: Private…
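The snippet is cut off before the lifecycle rule itself. A minimal sketch of what the remaining part under Properties could look like is shown below; the rule Id and the seven-day expiry are arbitrary, illustrative values.
      LifecycleConfiguration:
        Rules:
          - Id: 'expire-cache-objects' # Illustrative rule name
            Status: Enabled
            ExpirationInDays: 7 # Objects are deleted 7 days after creation; pick a TTL that suits your cache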