Monthly Pulse: March 2025
March was a month full of kicking off projects, most of which are still in progress. The largest of them is migrating away from AWS as my primary hosting provider, which has turned out to be more complex than I’d originally expected. Some of them, however, went so smoothly that they didn’t take more than a single day, like moving my backups from Backblaze to Hetzner. The complexity of each project comes down to the features the old and new providers offer: wherever the new provider doesn’t support something, I have to find a different approach.
From Backblaze to Hetzner
Hetzner was one of my very first hosting providers 25 years ago, and I remember having a good user experience with them back then. They were on the pricier side – especially for a teenager – but they were reliable, and they had a better user interface for their services than many of their competitors at that time.
Maybe it’s just their new S3-compatible offering, Hetzner Object Storage, but its user interface is… pretty mediocre and almost minimalist, as far as features go. I think Hetzner expects their customers to use their API, either directly or through third-party tools, because practically everything I needed to do, I could only do through the API.
For my backups, I didn’t have to do much though. Create a bucket, create credentials, and point my NAS at that bucket instead of the one at Backblaze. Done.
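For the curious, the pointing part is just an endpoint change in whatever S3-compatible client the NAS (or anything else) uses. A minimal sketch with an assumed bucket name, using the AWS CLI against what I believe is the endpoint format for Hetzner’s fsn1 location – check the bucket details in the Hetzner console for the actual URL:
# any S3-compatible client works; only the endpoint (and credentials) change
$ aws s3 ls s3://my-nas-backups \
      --endpoint-url https://fsn1.your-objectstorage.com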
Pricing is similar for both: Hetzner charges 5.94 EUR per TB, rounded up to the next full TB; Backblaze B2 asks for 6 USD per TB, billed at a much finer granularity. I couldn’t find anything more detailed than this:
Service is billed monthly, based on the amount of data stored per byte-hour over the last month at a rate of $6/TB/30-day.
Backblaze, however, charges an additional 19% VAT, which is already included in Hetzner’s pricing. Due to the aggressive rounding up to the next TB, Hetzner isn’t necessarily less expensive for me for the backups alone. But I had plans to move some of my S3 buckets from AWS over to Hetzner as well, and the benefit of having everything in one place is worth it to me.
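To put illustrative numbers on that rounding: with, say, 1.2 TB of backups, Backblaze would bill roughly 1.2 × 6 USD × 1.19 ≈ 8.57 USD including VAT, while Hetzner would round up to 2 TB and charge 2 × 5.94 EUR = 11.88 EUR. The rounding only stops hurting once a bucket sits close to a full-TB boundary.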
Since I only need the barebones functionality of an S3 bucket – I don’t even need object versioning – this change was easy to implement and entirely uneventful.
From AWS to Hetzner… and Gcore
As some of my domains were going to expire in April 2025, I decided to do more than just renew them: I moved them to another registrar, which in turn started a cascade of changes that culminated in leaving AWS behind entirely. They served me well for the almost 18 years I’ve been with them – and still counting, as I won’t terminate my account for now. Even though my recent AWS bills were pretty much just for Route 53 hosted zones and S3 storage, a closer look revealed that I was using much more than just that, thanks to free tiers. An even closer look made me realise how not-so-trivial moving somewhere else might be: my CloudFront distributions had some… complex configurations.
I will dive deeper into this migration once everything’s done. As it turned out, some of the changes I eventually had to make would have let me pick from more candidate providers than the shortlist I’d narrowed things down to.
Gcore ended up being “it” for me because of their support for something that apparently not many providers offer: “CNAME flattening”, essentially CNAME records for apex domains. A CNAME record technically cannot coexist with any other records for the same hostname, but the apex domain must have NS and SOA records.
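Spelled out as a (purely illustrative) zone file with placeholder names, the conflict looks like this – the apex must carry its SOA and NS records, but a CNAME may not share its name with anything else, so nameservers reject such a zone:
$ cat example.com.zone
; mandatory records at the apex
example.com.  3600  IN  SOA    ns1.example.com. hostmaster.example.com. 1 7200 3600 1209600 3600
example.com.  3600  IN  NS     ns1.example.com.
; invalid: a CNAME must not coexist with any other record for the same name
example.com.  3600  IN  CNAME  some-target.cdn.example.net.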
AWS Route 53 works around that by allowing aliases to AWS entities like CloudFront distributions, API Gateways, or load balancers in their A and AAAA records, which they then resolve to actual IP addresses to build correct DNS responses. This also allows them to return the “best” set of IP addresses for CloudFront distributions based on the requester’s location, if they have enough data points to do so.
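For reference, this is roughly what creating such an alias looks like through the AWS CLI – a sketch with a placeholder zone ID and distribution; Z2FDTNDATAQYW2 is the fixed hosted zone ID that AWS documents for all CloudFront distributions:
$ aws route53 change-resource-record-sets \
      --hosted-zone-id Z0000000EXAMPLE \
      --change-batch '{
        "Changes": [{
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "example.com",
            "Type": "A",
            "AliasTarget": {
              "HostedZoneId": "Z2FDTNDATAQYW2",
              "DNSName": "d111111abcdef8.cloudfront.net",
              "EvaluateTargetHealth": false
            }
          }
        }]
      }'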
Gcore does something similar, maybe minus the location-based optimisations – I haven’t found anything concrete on those: they resolve the CNAME record on the apex domain, which can point anywhere and is not limited to Gcore hostnames, to an IP address, cache that for the TTL duration, and respond with A and AAAA records.
$ host -t a huydinh.eu ns1.gcorelabs.net
Using domain server:
Name: ns1.gcorelabs.net
Address: 92.223.100.53#53
Aliases:
huydinh.eu has address 81.28.12.12
$ host -t aaaa huydinh.eu ns1.gcorelabs.net
Using domain server:
Name: ns1.gcorelabs.net
Address: 92.223.100.53#53
Aliases:
huydinh.eu has IPv6 address 2a03:90c0:999c::12
Unlike AWS, where you don’t actually create a CNAME record, on Gcore you do – and if you ask for such a CNAME record, you will receive one.
$ host -t cname huydinh.eu ns1.gcorelabs.net
Using domain server:
Name: ns1.gcorelabs.net
Address: 92.223.100.53#53
Aliases:
huydinh.eu is an alias for cl-gld491d3f2.gcdn.co.
So far, it seems to be working as expected, and I’m not aware of anything breaking yet. But since defying standards can break things pretty badly, I also had to limit the blast radius by changing the few email addresses I had left on @huydinh.eu to something else. I’m still receiving emails to @huydinh.eu, but no service out there should have a current email address on that domain anymore.
I probably obsess way too much over having an apex domain (huydinh.eu) for my website. (The lack of aliases on the apex domain is why so many websites redirect from the apex domain to a subdomain.) But that obsession, along with my other obsession of having a CDN between the client and whatever is actually serving my website, made it pretty difficult to find compatible providers.
It’s been more work than I thought it’d be, but I think I’m almost there. Some things, like Gcore’s CDN rules, had to change because they didn’t work out the way I’d hoped. But those changes are probably for the better: my setup is less unhinged now (except for my DNS records, maybe…), and I have an actual server again instead of an S3 bucket plus Lambda functions.
Terraform
This migration also let me get familiar with Terraform, which I hadn’t really used much before. My main reason for picking Terraform over something like OpenTofu is very simple: I received a very concise, easy-to-understand piece of feedback in my most recent round of 360 reviews at Solaris.
If I could make a wish, I would ask you to improve on the following skills/knowledge:
terraform
Sure, I know how to change something to something else in our Terraform setup. I might even be smart enough to copy-paste something to create something new. But how exactly does everything work together in our Terraform repository? Uh, no idea – and I’m very open about admitting when I don’t know the answer to something. Someone apparently found it noteworthy that a “Senior Engineer” has such a gap and felt it was worth pointing out.
Don’t get me wrong: I appreciate that simple and at the same time very actionable piece of feedback, and I’m working on it.
The nice side effect of doing things outside work and breaking them in the process: I get to write about them as much as I want, in whatever level of detail I want.
Soon™.