Multimedia particles in the style of a tweet, also serving as a changelog to consolidate changes elsewhere in the site. Cross-posted to an atom feed. Frequently off topic.
Published fragment Rails World 2024, with a few reflections on this year’s event.
Published sequence 088, royal lion hunt.
I visited the British Museum in London during my stay there last year. The museum has a wealth of ancient artifacts, including some of the most famous in history like the Rosetta Stone, but despite having my camera with me, I took few photos while I was there. All I could think of was the tens of thousands of times each of these objects was photographed every day, contributing to an enormous body of billions of photos, 99.9999% of which would never be glanced at again.
This is one of the few artifacts I photographed because I liked it so much. It’s artwork on stone depicting Assyrian royals taking part in a lion hunt, circa 645 BC, right around the time when the civilization would collapse.
At the time I knew almost nothing about Assyria, but a friend sent over the excellent episode “Empire of Iron” from Paul Cooper’s Fall of Civilizations podcast (also on YouTube). It starts by describing how the Greek general Xenophon came across the ruins of two colossal cities as he was returning from a battle in 401 BC. We know now that these were the Assyrian cities Kalhu and Nineveh, but by then (about 200 years post-collapse) locals knew nothing about them, even though the ruins were far greater in scope and sophistication than anything they could build at the time. It would’ve been like living amongst ancient ruins built by giants.
Published sequence 087, Transamerica.
Published fragment TIL: Variables in custom VSCode snippets, on using built-in variables in VSCode snippets to make publishing to this site incrementally faster.
Of mild interest, Stripe has announced a new API release process. Two named API versions will be released each year, named after plants (e.g. “acacia”) and presumably following an A-Z scheme similar to Ubuntu’s naming.
Previously, API changes roughly followed this procedure: non-breaking changes could be released onto the current version at any time, while breaking changes were gated behind a new dated API version that shipped whenever it was ready.
By my reading, the new scheme seems to be largely the same, except that non-breaking changes would be held for a monthly release on the current version, and breaking changes would be held for up to six months until the next named version.
I suppose the benefit of the new approach is that it gives users a more predictable cadence for breaking changes. Optimistically, maybe it gets them in the habit of updating their API version twice a year. Even more optimistically, maybe it starts to pave the path for a formal deprecation lifecycle so that ancient API versions could eventually be retired.
Published fragment A few secure, random bytes without pgcrypto, on avoiding the pgcrypto extension and its OpenSSL dependency by generating cryptographically secure randomness through gen_random_uuid().
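The core idea, in a minimal sketch of my own (the fragment covers the details and caveats): gen_random_uuid() ships with modern Postgres, and each v4 UUID it returns carries 122 bits of cryptographically secure randomness, so a handful of random bytes can be derived from it in plain SQL:

-- Strip the dashes from a v4 UUID and hex-decode the result to get 16
-- bytes, 122 of whose 128 bits are random (the remaining 6 bits are the
-- fixed UUID version/variant fields).
SELECT decode(replace(gen_random_uuid()::text, '-', ''), 'hex');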
Published Real World Performance Gains With Postgres 17 B-tree Bulk Scans, in which we benchmark one of our API endpoints and get a 30% throughput improvement, along with a 20% drop in response time.
As long as you make heavy use of eager loading (which every serious application does to remove N+1s), Postgres 17 looks to be one of those releases where all you have to do is upgrade and reap a major performance gain for free.
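To illustrate the connection (the query below is a hypothetical example of mine, not one from the benchmark): eager loading collapses an N+1 into a single batched lookup with a multi-value predicate, and Postgres 17’s B-tree bulk scan work makes exactly that kind of = ANY(...) search cheaper by reusing index descents across the whole list of keys.

-- A typical query emitted by an ORM eager-loading an association; table
-- and column names are illustrative. Postgres 17 handles the array of
-- keys far more efficiently within a single B-tree index scan.
SELECT *
FROM orders
WHERE customer_id = ANY('{1, 2, 3, 4, 5}');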
Published sequence 086, County Highway.
Published fragment Direnv’s source_env, and how to manage project configuration, on how I accidentally stumbled across the source_env directive and dramatically improved my configuration methodology overnight.
I pushed a new version of redis-cell today, a project that I still somewhat maintain, but only touch once a year or so.
While looking into another issue that someone had filed, I got the bright idea to update the project’s dependencies. That was a mistake, and I ended up sinking hours into fixing calls to the time crate. It wasn’t just that a few breaking changes had been introduced – no, the entire API had changed, and every use of any function or type from the crate had to be fixed. There was no upgrade guide.
I really want to like Rust, but something like this happens every time I go back to the language. This wasn’t some novel third-party dependency that broke. It was time, one of the most core facilities of any programming language, and although the changes that broke me are older now, a cursory look at the project’s changelog shows that it’s still regularly deprecating and changing APIs in recent versions.
Zero cost abstractions are cool, but you know what I like better? Stability.
After coming off the absolute blight on human consciousness that was The Acolyte, I found myself wanting to go back and watch the original Star Wars trilogy.
I was a teen when its “Special Edition” revisions were released, and I remembered that George Lucas had gone on record at the time saying that these were now the definitive versions of the movies. But that was decades ago, and I’d just assumed that the smallest modicum of rationality had won out since then, and that HD versions of the theatrical releases had gone out. I mean – the menagerie of Jar Jar-esque CGI critters on Tatooine and Han walking over Jabba’s tail – it’s all so clownish that no one could possibly have stuck to that line. Right?
Wrong. I watched a few minutes of the latest Blu-ray release and it was painful. It’s all in there. Even in the 90s the CGI looked awful. Now, it’s a punchline.
Scrounging the web, I came across Project 4K77 (’77 being the year A New Hope came out), which also hosts Project 4K80 and 4K83 for Empire and Jedi, where fans have scanned 35 mm film frame by frame at 4K resolution and painstakingly cleaned up the whole collection to approach modern standards.
I watched a copy, and it was exactly what I was looking for. Not only is all the Special Edition garbage gone, but it looks considerably better than Lucasfilm’s Blu-ray restoration. It’s grainy, but left that way on purpose to stay true to the original theatrical release.
I’m at the point now where I’m pretending no Star Wars past the original trilogy exists. Who could possibly have guessed not only how badly the prequels would turn out, but that the sequel trilogy would be even worse, and that the TV follow-ups would be down in the gutter with it?
Oh, and mercifully, Han shoots first.
Published fragment Elden Ring, on how I broke my promise never to give FromSoftware money again, and it was okay.
Golang Weekly notes that Go has jumped to the 7th position on the TIOBE index, which measures programming language popularity.
The rankings are still hard to believe (does anyone actually believe there’s more C++ development happening than JS/TS?), but even so, a positive sign!
I’ve updated The Two-phase Data Load and Render Pattern in Go after Roman pointed out that if we swap the position of two generic parameters in Render, another generic parameter can be inferred, and every invocation gets considerably cleaner.
Previously, Render looked like this:
func Render[TLoadBundle any, TRenderable Renderable[TLoadBundle, TModel, TRenderable], TModel any](
    ctx context.Context, e db.Executor, baseParams *pbaseparam.BaseParams, model TModel,
) (TRenderable, error)
And was invoked like:
resource, err := apiresource.Render[*apiresourcekind.ProductLoadBundle, *apiresourcekind.Product](
    ctx, tx, svc.BaseParams, product,
)
In the updated version, the positions of the first two generic parameters are swapped:
func Render[TRenderable Renderable[TLoadBundle, TModel, TRenderable], TLoadBundle any, TModel any](
    ctx context.Context, e db.Executor, baseParams *pbaseparam.BaseParams, model TModel,
) (TRenderable, error)
And the function can now be invoked like this:
resource, err := apiresource.Render[*apiresourcekind.Product](
    ctx, tx, svc.BaseParams, product,
)
Much cleaner. A caller no longer even needs to know that the load bundle exists. At work I applied the fix to our hundreds of lines of existing calls, and the difference in readability is night and day.
River Python has shipped (with a huge assist from Eric Hauser, who contributed all the original code), enabling insertion of jobs in Python that will be worked in Go. It supports all the normal insert features including unique jobs and batch insertion, along with Python-specific stretch goals like exported type signatures, async I/O, and a @dataclass-friendly JobArgs protocol.
Here’s roughly what it looks like in action:
@dataclass
class SortArgs:
    strings: list[str]

    kind: str = "sort"

    def to_json(self) -> str:
        return json.dumps({"strings": self.strings})

engine = sqlalchemy.create_engine("postgresql://...")

client = riverqueue.Client(riversqlalchemy.Driver(engine))

insert_res = client.insert(
    SortArgs(strings=["whale", "tiger", "bear"]),
)