Using AWS S3 as a simple cache service
S3 is great for file storage, but it can do much more. I love using S3 as a simple caching mechanism for stateless functions that need to persist a little ephemeral data between invocations.
Traditionally, you would reach for an in-memory store like Redis for this, and Redis still has its place: it will be faster than retrieving data from S3 in almost every case. However, if millisecond performance is not a concern, S3 is a cheap, low-effort and simple-to-implement alternative.
Some basic caching helpers
```javascript
// cache.js
const { S3 } = require('aws-sdk');

const s3 = new S3({ region: 'eu-west-1' });

const { CACHE_BUCKET } = process.env;

const get = async (key, defaultValue = null) => {
  try {
    const { Body } = await s3
      .getObject({
        Bucket: CACHE_BUCKET,
        Key: `${key}.json`
      })
      .promise();
    return JSON.parse(Body.toString());
  } catch (e) {
    // File might not exist yet
    return defaultValue;
  }
};

const set = (key, value) =>
  s3
    .putObject({
      Bucket: CACHE_BUCKET,
      Key: `${key}.json`,
      Body: JSON.stringify(value)
    })
    .promise();

module.exports = {
  get,
  set
};
```
You would use them like this (be sure to define the `CACHE_BUCKET` environment variable):

```javascript
const cache = require('./cache.js');
```
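As a sketch of the pattern in practice, here is a hypothetical read-through memoization of a slow lookup. So the example is self-contained, an in-memory stub stands in for `./cache.js`; the stub exposes the same async `get`/`set` interface as the S3-backed module above, and `fetchUserProfile` is an invented placeholder for whatever expensive call you want to cache.

```javascript
// In-memory stand-in for ./cache.js (same async get/set interface,
// but backed by a Map instead of S3, purely for illustration).
const store = new Map();
const cache = {
  get: async (key, defaultValue = null) =>
    store.has(key) ? store.get(key) : defaultValue,
  set: async (key, value) => { store.set(key, value); }
};

// Pretend this is a slow upstream call we only want to make once per key.
let upstreamCalls = 0;
const fetchUserProfile = async (id) => {
  upstreamCalls += 1;
  return { id, name: `user-${id}` };
};

// Read-through cache: return the cached value if present,
// otherwise fetch, store, and return the fresh value.
const getUserProfile = async (id) => {
  const cached = await cache.get(`user-${id}`);
  if (cached) return cached;
  const fresh = await fetchUserProfile(id);
  await cache.set(`user-${id}`, fresh);
  return fresh;
};
```

With the real module, swapping the stub for `require('./cache.js')` gives the same behaviour, except that the cached values survive across invocations because they live in S3 rather than in process memory.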