I started the new year by getting kicked off Cloudinary’s free tier for image storage.
Deciding that I do not want to have to migrate ever again, I have architected a simple way to get off of SaaS/cloud services and still maintain control of my media. I was not willing to pay $99 a month for a hosted image service (I never used their transformations, etc.) - and while Cloudinary offers a generous free tier for storage, I got hit by the bandwidth limitations (a good problem to have, I suppose!).
This will serve as a guide to how I set up the infrastructure, and how I migrated away from Cloudinary's nice, albeit walled, garden.
## The Migration off Cloudinary
### The Media
I simply downloaded all media from Cloudinary and saved it locally on my machine, in directories for each site/project.
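For reference, the download itself can also be scripted. Here is a rough sketch using Cloudinary's Admin API - the cloud name, key, and secret are placeholders, it only fetches the first page of results, and a real export would need to follow `next_cursor` for pagination:

```bash
#!/usr/bin/env bash
# Rough sketch: list image resources via the Admin API and pull each secure_url.
# Requires curl and jq.
set -euo pipefail

CLOUD_NAME="your-cloud-name"
API_KEY="your-api-key"
API_SECRET="your-api-secret"

curl -s -u "$API_KEY:$API_SECRET" \
  "https://api.cloudinary.com/v1_1/$CLOUD_NAME/resources/image?max_results=500" \
  | jq -r '.resources[].secure_url' \
  | while read -r url; do
      curl -sO "$url"   # saves each file under its remote name
    done
```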
The files were not named well, so I spent the hour or so going through them in Emacs (dirvish/dired) and renaming them so that they would map to logical paths in object storage. I then placed the media in directories for my projects and was ready for the migration.
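I did the renaming by hand in dired, but if you would rather script the cleanup, a rough slugify pass along these lines works (the directory and naming rules here are just an example):

```bash
# Lowercase filenames and turn spaces/underscores into hyphens so object paths
# stay predictable. mv -n avoids clobbering if two names collide.
find "$HOME/Media" -type f | while read -r f; do
  dir=$(dirname "$f")
  base=$(basename "$f")
  clean=$(echo "$base" | tr '[:upper:]' '[:lower:]' | tr ' _' '--')
  if [ "$base" != "$clean" ]; then
    mv -n "$f" "$dir/$clean"
  fi
done
```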
The file tree looks something like this:
```
Media
├── project1
├── project2
├── joshblais
│   └── post
├── Mountainthebook
├── company
│   ├── project
│   ├── logos
│   ├── project
│   │   └── projectlisting
│   └── blog
└── project3
```

It was now cleaned and ready to be shipped to object storage.
### S3 Compatible Object Storage
Hetzner offers a pretty decent S3-compatible object storage, so I spun up a bucket for this.
### Create Credentials
Upon creation of the bucket, you will be prompted to save your credentials - copy these to your password store for later reference.
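If your password store happens to be pass, that can be as simple as the following (the entry names are arbitrary):

```bash
# Store the two values the console shows you
pass insert hetzner/s3-access-key
pass insert hetzner/s3-secret-key

# Pull them back out when filling in rclone.conf
pass show hetzner/s3-access-key
pass show hetzner/s3-secret-key
```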
rclone.conf:
```ini
[hetzner]
type = s3
provider = Other
env_auth = false
access_key_id = YOUR_KEY
secret_access_key = YOUR_SECRET
endpoint = fsn1.your-objectstorage.com
acl = public-read
```
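Before copying anything over, it is worth a quick sanity check that the remote resolves and the credentials work (the bucket name below is a placeholder):

```bash
# List buckets on the remote, then the contents of one bucket
rclone lsd hetzner:
rclone ls hetzner:your-bucket-name
```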
### Copying media over to the Bucket

I then rcloned the whole Media directory to the S3-compatible storage bucket, and URLs are then referenced like so:
`https://cdnlocation.net/joshblais/vimiumc2.png`
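A quick way to confirm the public-read ACL actually took is a HEAD request against one of those URLs (the exact headers you get back will vary):

```bash
# Expect a 200 and an image content type; a 403 means the object is not public
curl -I https://cdnlocation.net/joshblais/vimiumc2.png
```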
I created a script for this using a nix-shell shebang, defining rclone as a dependency, and now it's portable to any system running Nix - nice.
```bash
#!/usr/bin/env nix-shell
#!nix-shell -i bash -p rclone

set -euo pipefail

MEDIA_DIR="${1:-$HOME/Media}"
BUCKET="your-bucket-name"
REMOTE="hetzner"

rclone sync "$MEDIA_DIR" "$REMOTE:$BUCKET" \
  --progress \
  --checksum \
  --transfers 8
```
### Renaming tags in projects

The (not) fun part is migrating the tags in all your projects. In Emacs, I wrote a quick Lisp function to go through the projects and hunt down Cloudinary references:
```elisp
(defun jb/hunt-cloudinary ()
  "Interactively hunt and migrate cloudinary references."
  (interactive)
  (consult-ripgrep nil "cloudinary"))

(map! :leader
      (:prefix ("j" . "jump")
       :desc "Hunt cloudinary refs" "t" #'jb/hunt-cloudinary))
```

I then replaced each reference with the correct image and path. This took a few hours, but it should never have to happen again, as I now have all images named correctly (and continue to name them properly as new ones are added) - if I ever need to migrate storage hosts again, I will just change the CDN name in a project-wide find-and-replace.
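And if that day ever comes, the find-replace itself can be scripted rather than done in the editor. A minimal sketch with ripgrep and sed (both hostnames are placeholders):

```bash
# Rewrite every reference to the old host across the project; commit or back up first
OLD="res.cloudinary.com/your-cloud-name"
NEW="cdnlocation.net"

rg -l --fixed-strings "$OLD" . | while read -r file; do
  sed -i "s|$OLD|$NEW|g" "$file"
done
```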
## Bunny CDN for caching images globally
One thing I lost with Cloudinary is the ability to have a global CDN caching my images. This was easily implemented with BunnyCDN - I created a pull zone, pointed it at my Hetzner storage bucket, and it just works.
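To check that the pull zone is actually serving from cache, you can request the same object twice and look at the cache-related response headers (the exact header names depend on the CDN):

```bash
# The second request should show a cache hit once the edge has the object
curl -sI https://cdnlocation.net/joshblais/vimiumc2.png | grep -i cache
curl -sI https://cdnlocation.net/joshblais/vimiumc2.png | grep -i cache
```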
## Cost
So, while Cloudinary wanted to charge me $99 a month, this setup costs ~€5/month. I don't get the image transformations baked in, but I will likely add them either as pre-processing in the upload pipeline or at runtime via imgproxy.
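For the runtime option, here is a rough sketch of what that could look like - none of this is set up yet, and the environment variables, image name, and URL format should be checked against the imgproxy docs before relying on them:

```bash
# Run imgproxy with the Hetzner bucket as an S3 source (values are placeholders)
docker run -p 8080:8080 \
  -e IMGPROXY_USE_S3=true \
  -e IMGPROXY_S3_ENDPOINT=https://fsn1.your-objectstorage.com \
  -e AWS_ACCESS_KEY_ID=YOUR_KEY \
  -e AWS_SECRET_ACCESS_KEY=YOUR_SECRET \
  ghcr.io/imgproxy/imgproxy:latest

# Then something along these lines would serve a resized variant on the fly:
# http://localhost:8080/insecure/rs:fit:800:0/plain/s3://your-bucket-name/joshblais/vimiumc2.png
```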
Overall, I am very happy with the setup: it is completely web-GUI free, fast, and quite robust should another migration ever be needed in the future. The 1TB plan with Hetzner is very reasonable and gives a ton of overhead for years to come.
As always, God bless, and until next time.
If you enjoyed this post, consider Supporting my work, Checking out my book, Working with me, or sending me an Email to tell me what you think.