Atoms

Multimedia particles in a similar style to a tweet, also serving as a changelog for this site. Cross-posted to an atom feed and Spring '83 board. Frequently off topic.

This is an archive page that shows every atom ever published. See also Atoms, which shows only recent posts.

Go 1.20 is released. From the notes:

Work specifically targeting compilation times led to build improvements by up to 10%. This brings build speeds back in line with Go 1.17.

Excellent. New features are nice, but it’s the boring fundamentals like build times that really make a difference for everyday users. The addition of generics in Go 1.18 had an impact on build times, but those losses have been recouped. Generics are good. Fast builds are good. We can have both.


HBO’s The Last of Us television adaptation is our latest cultural zeitgeist, and I’m watching it like everybody else. Last night’s episode (ep 3) was the best so far, although it’s notable what they changed to appeal to a broader audience.

— SPOILERS —

Without going into the specifics, suffice it to say that Bill and Frank’s life in the game version is more antagonistic and stark. You can spoil yourself here, but the last piece in Bill’s arc is grim, and it’s not because he dies or is injured. He’s also more abrasive, and exchanges some great one-liners with Ellie as they insult each other. The TV version portrays a far more idyllic life and a very gentle redemption arc. It plays on your emotions well and I liked it better overall, but it does somewhat detract from driving home the brutal edge of The Last of Us’s world.

By the way, I love how they kept the game’s military-style angle-head flashlights looped onto their backpack straps (see second image below). You can actually buy these things.


See: PEP 703 – Making the Global Interpreter Lock Optional in CPython.

Fascinating stuff, and the detail and attention that go into these PEPs is just incredible. The GIL (global interpreter lock) has been the age-old Achilles’ heel of Python and Ruby, and given the advanced age of these languages, I’d assumed it was a problem that wasn’t going to be solved, as an increasingly intricate implementation makes fundamental changes harder to tackle every year. But an increasingly parallel world of computing seems to have created enough demand to attract people with the necessary gumption to pull this off.

Ruby gave up on removing the GIL in favor of Ractors, which suggest the use of many parallel environments, each with its own GIL. They might’ve worked, but they were extremely backward incompatible, and two years later, nothing supports them.

Python’s optional GIL will be opt-in via compiler flag, but provides a path for the ecosystem to migrate incrementally so it could be default in a not-so-distant future.

A couple things stood out to me. The implementation will move to a stop-the-world GC to freeze necessary state:

When running without the GIL, the implementation needs a way to ensure that reference counts remain stable during cycle detection. Threads running Python code must be paused to ensure that references and reference counts remain stable. Once the cycles are identified, other threads are resumed.

And because the GC becomes stop-the-world, its generational GC is disabled:

The existing Python garbage collector uses three generations. When compiling without the GIL, the garbage collector will only use a single generation (i.e., non-generational). The primary reason for this change is to reduce the impact of the stop-the-world pauses in multithreaded applications.

Single-threaded performance takes a 10% hit, but there are already suggestions for changes to recoup some of that. Over time, most of it will likely get optimized away.


Published sequence 039, on the site of the bird site.


Two years ago in the depths of San Francisco Covid-mania, I wrote up some predictions on what would happen to the city over the medium term. Today I published Revisiting my two-year SF predictions, and by my count, got 7 out of 9 right.

Some of these seem obvious in retrospect, but I wrote them down because they were starkly contrarian compared to claims made by the city’s commentariat. A popular opinion at the time was that tech needed San Francisco, so the longest and hardest lockdown in the nation would be perfectly okay as a closed downtown would spring right back to life on a mere word from London Breed, like a dog waking on command. Policy apologists guffawed with dramatic bemusement at the very idea that any of these departures could be permanent. Today, we see that almost every one of them was.

Network effects in cities are powerful, and that has indeed been the story of SF and the Bay Area writ large over the past decade, as companies swooped in to soak up innovative energy, capital, and talent, which further compounded and cascaded. My prediction today is that the city will continue to see a reverse network effect as more companies give up expensive leases with continually lessening ROI.


Published sequence 038. You look nice today.


Posted today: Stripe Sets One-Year Timetable to Decide on Going Public. Along with this classic Stripe press “leak”, the company also sent a simultaneous email to alumni, which, honestly, was nice of them.

Predictably, there was a lot of catharsis among ex-employees, with even more chatter than during the 14% layoff a few months ago, and a hundredfold above ambient Slack levels. With the first batch of RSUs set to expire in early 2024, there’d been a lot of consternation over the last couple years, to say the least, so it was a big event.

It’s unambiguously a good thing, but I couldn’t help but notice that even with this BEST NEWS EVER message, the company is still tacking into its usual noncommittal stance. Instead of unambiguously planning an IPO, it’s “either an IPO or private market transaction”, leaving huge error bars and uncertainty. Maybe that’s something cynical, or maybe it’s just unavoidable given the unpredictable market conditions of 2023. Hopefully it’s elite 4-D chess in pursuit of profitable ends that lowly grunts like myself aren’t privy to, and couldn’t possibly understand.


Is this a food picture blog now? I hope not. I’ll try not to make a habit of it.

My curiosity was piqued when I found out there’s a Mensho subsidiary on the ground floor of the Twitter building. Mensho’s a ramen company out of Japan that opened a shop called Mensho Tokyo a few years ago in San Francisco, notable for having the best ramen on this continent, and an omnipresent 50+ person lineup out the door and down the street, something I thought would clear with time, but never did. In 2021, they expanded into Twitter’s ground floor with a new installment – Jikasei Mensho.

Unlike the original, Jikasei Mensho’s more of a fast lunch stop sort of deal, and as far as I can tell, not great? You walk up to a counter, order from a computer McDonald’s-style, and your minimum tip option is 15% on a $22 bowl of fast food ramen for the privilege. It arrives in a plastic bowl. The chashu was good, the eggs were good, but the noodles and broth were subpar, and some of the furnishings questionable, like sliced lemons. Maybe I happened across it on an off day, or maybe ramen tastes worse out of plastic.

Post-Elon return-to-office Twitter is a bit more lively. There were quite a few people down in the cafeteria area where Jikasei Mensho is, where only a few months ago it all sat empty.


Published sequence 037 on the stairs at Oyster Point.


Witnessed a laptop snatching this morning. Woman is sitting next to window in a crowded cafe. Thief comes crashing in, grabs woman’s computer, and bolts. Five people run out after him, but as with a lot of criminal activity in San Francisco these days, it was organized, and a getaway car was waiting at the corner. Thief slams car door shut and rides off into the sunset (not literally; this was 11 AM).

It sounds blasé described post hoc, but even “just” a theft with nobody hurt (luckily – it could easily have gone the other way) is hectic in real time. Mentally downshifted into computing mode at the time, I took a good ten seconds to process what’d even happened. People yelling. Tables knocked over. Coffee cups in pieces on the floor.

Whenever something like this comes up, San Francisco apologists are quick to make platitudinal statements like “report it to the police”, as if this is some kind of deep insight that only an enlightened California pro(re)gressive could have imagined. The situation was a perfect microcosm for why most people have stopped bothering. The SFPD was called immediately, and I waited around an hour without a single officer showing up, despite being within six blocks of a station, a cruiser rolling by every few minutes, and a situation that easily could’ve ended with someone seriously hurt. The crime had dozens of witnesses and clear HD footage from two separate cameras, but even if the PD did eventually appear, no one will be caught, let alone see a day in prison.

I started chatting with the guy next to me, who, having moved to the city just a week previous, was somewhat surprised. (But not that surprised, having gotten an accurate taste of the city’s culture from various viral videos.)

I told him that this wasn’t unknown, but wasn’t common either. San Francisco, intent on preserving its reputation as a deteriorating municipal hell stemming from the worst mismanagement of wealth in a thousand years, made sure to prove me wrong. Five minutes later, a woman walks in, grabs a handful of bills from the tip jar, and before anyone can react, leisurely strolls back out.


Published sequence 036 on 510 Townsend St.


I finally finished God of War Ragnarök this weekend. It’s a great game, with creative interpretations of Thor and Odin that turned out very well. Thor’s a lumbering giant who’s highly able, even if overly loyal to his father. Odin’s got mob boss vibes, which is unconventional, but a gamble that paid off. The game is huge, and I was surprised multiple times thinking I’d gotten to the end only to find a whole new world to explore.

I 100%’ed it, unlike the 2018 God of War, which meant I wasted a ton of time following YouTube guides to find artifacts and ravens. I downgraded the difficulty to “give me grace” for the last two optional bosses and don’t feel bad about it. They spam attacks like there’s no tomorrow, have a criminal number of unblockables, and the target lock system kind of sucks, especially in multi-enemy fights.

It’s notable that although very good, for all intents and purposes it’s the same game as the one released in 2018. There’s a new story and new areas (though with some reuse), but the game engine, mechanics, upgrades/skills system, and combat are all practically identical. Development on 2018’s entry started in 2014, which means that back then it took five years to build a full game and a whole new engine, whereas Ragnarök took six for a game plus some minor updates. I’m sure some of that was lockdown delay, but I kind of suspect the game industry is hitting a plateau for large projects in the same way NASA did for space missions, Lockheed Martin for fighter jets, or Oracle for databases. Horizon Forbidden West (another triple-A title) was the same – great game, but practically indistinguishable from the original.


Exercises in verbosity: Loading data in Go is a heck of a lot of code

I’ve been writing Go professionally for something like a year and a half now, and compared to my previous daily driver Ruby, almost everything is better. Readability, speed of runtime, speed of tests, speed of refactoring, IDE insight, tooling – I could go on all day.

But when building out a larger-scale app with a non-trivial amount of domain logic and objects, it’s got holes. Some are as wide as Jupiter, and I can’t believe how little I see about them online.

The biggest one I’m looking for an answer to is data loading, or more specifically, how to load data without thousands of lines of extraneous boilerplate. We solved our SQL-in-Go problem by moving to sqlc, but that in itself isn’t enough. I still write code like this daily:

team, err := queries.TeamGetByID(ctx, uuid.UUID(req.TeamID))
if err != nil {
    return nil, xerrors.Errorf("error getting team: %w", err)
}

if team.OrganizationID.Valid {
    org, err := queries.OrganizationGetByID(ctx, team.OrganizationID.UUID)
    if err != nil {
        return nil, xerrors.Errorf("error getting organization: %w", err)
    }

    if org.MarketplaceID.Valid {
        marketplace, err := queries.MarketplaceGetByID(ctx, org.MarketplaceID.UUID)
        if err != nil {
            return nil, xerrors.Errorf("error getting marketplace: %w", err)
        }

        return nil, apierror.NewBadRequestErrorf(ctx,
            errMessageTeamDeleteMarketplace, marketplace.DisplayName)
    }
}

In Ruby (or any language with some dynamicism and a good ORM), that whole block compacts comfortably to a single line of code:

raise ... if Team[req.team_id].organization.marketplace

And the Go version is only as short as it is because our foreign keys mean that we can skip handling some types of user-facing errors, e.g. we don’t have to worry about translating a missing organization to a 404 because a foreign key guarantees its existence. Sqlc also saves hundreds of lines – before, the Go code for every one of those queries like TeamGetByID had to be written by a human and tested. Now we write the SQL and let sqlc do the work, but with Go’s built-in database/sql you’d still be doing all of it by hand.

Another bad-but-unavoidable Go pattern is preloading objects to avoid N + 1 queries, but then having to manually map them into structures that your code can actually use:

teams, err := queries.TeamGetByIDMany(ctx, sliceutil.Map(unsentInvoices,
    func(i dbsqlc.Invoice) uuid.UUID { return i.TeamID }))
if err != nil {
    return xerrors.Errorf("error getting teams: %w", err)
}

teamsMap := sliceutil.KeyBy(teams,
    func(t dbsqlc.Team) uuid.UUID { return t.ID })

Again, with an ORM this is:

Invoice.load(..., eager: [:team]).each { |i| i.team.name }

And the Go code was >2x longer just one release of Go ago. Notice the functions sliceutil.Map and sliceutil.KeyBy, which were impossible without generics. Before, each of these was an initialization plus a for loop – 4-5 lines apiece.
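
For the curious, helpers like these are only a few lines of generic Go apiece. A minimal sketch of what Map and KeyBy might look like – our internal sliceutil versions may differ, so treat this as illustrative:

```go
package main

import "fmt"

// Map returns a new slice containing fn applied to every element of s.
func Map[T, U any](s []T, fn func(T) U) []U {
	out := make([]U, len(s))
	for i, v := range s {
		out[i] = fn(v)
	}
	return out
}

// KeyBy indexes a slice into a map keyed by fn(element). Later elements
// win on key collisions.
func KeyBy[K comparable, T any](s []T, fn func(T) K) map[K]T {
	out := make(map[K]T, len(s))
	for _, v := range s {
		out[fn(v)] = v
	}
	return out
}

func main() {
	lengths := Map([]string{"a", "bb", "ccc"}, func(s string) int { return len(s) })
	fmt.Println(lengths) // [1 2 3]

	byLen := KeyBy([]string{"a", "bb"}, func(s string) int { return len(s) })
	fmt.Println(byLen[2]) // bb
}
```

The type parameters are the whole trick: `K comparable` is what lets KeyBy build a map on any hashable key, which wasn’t expressible before Go 1.18.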

I’ve experimented with custom data loading frameworks that sit a layer above sqlc to reduce boilerplate:

err = dbload.New(tx).
    Add(dbload.Loader(&loadBundle.Cluster, req.ClusterID)).
    Add(dbload.LoaderCustomID(&loadBundle.Provider,
        req.ProviderID)).
    Add(dbload.LoaderCustomID(&loadBundle.Plan,
        dbsqlc.ProviderAndPlan(req.ProviderID, req.PlanID))).
    Add(dbload.LoaderCustomID(&loadBundle.Region,
        dbsqlc.ProviderAndRegion(req.ProviderID, req.RegionID))).

    // must be loaded after cluster
    Add(dbload.LoaderFunc(&loadBundle.PostgresVersion, func() *uuid.UUID {
        return &loadBundle.Cluster.PostgresVersionID
    })).
    Add(dbload.LoaderFunc(&loadBundle.Team, func() *uuid.UUID {
        return &loadBundle.Cluster.TeamID
    })).
    Load(ctx)
if err != nil {
    return nil, err
}

But so far it’s nowhere near enough – this one makes point loading by ID a lot more succinct, but doesn’t handle anything less trivial, like associated objects or one-to-many relationships.

I’m fairly sure that there’s nothing approaching an answer to this problem in the Go ecosystem. But aren’t there millions of lines of production Go out there by now? How aren’t more people running into this?


An article from 37 Signals on cloud spend.

They’re concerned enough about their AWS spend that they’re moving to their own hardware, a path paved before them by the likes of Dropbox and GitHub. It’s also always interesting when companies are transparent about their cloud spend. Hey (their email service), for example, costs $89k/month, or $1.1M/year, in AWS, with the biggest component being RDS, which eats a quarter of that.

The AWS bill is a creeping concern that tends to trend slowly upward over time without anyone really noticing, then suddenly jumps out of a bush to knife you in the jugular. In the beginning it’s “wow, look at all this stuff we’re getting for a few bucks a month”. Then a few years later it’s “what?! how many f* millions?? holy shit!!” It’s amazing how much money can be spent at $0.023 a gigabyte. At Heroku and Stripe we were eventually forced to undertake major AWS cost reduction projects, and in both cases the engineers involved ended up paying their own salaries many times over.


[3 months since day zero. The day Elon Musk bought Twitter, and the world ended.]

Know anybody who loudly quit “the bird site” in anger to go to Mastodon, and has stuck with it?

Me neither.


An essay: There’s no planet B, making the case that making another planet like Mars fit for humanity is going to be so impossibly hard that it’s not going to happen, and by extension that the bulk of our energies should be focused on planet A, the one we’re already living on.

I’m reminded of Christopher Nolan’s Interstellar, in which he makes an effort to keep the movie relatively grounded, famously bringing on physicists as consultants, and even using contemporary shuttles and rockets in some parts. But even this “grounded” movie necessitates the sudden appearance of a nearby wormhole, and the invention of anti-gravity technology for the plot to work.

As cool as SpaceX is, it’s not even one percent of one percent of what humanity needs to colonize another planet, and appears to be right up against the envelope in terms of scale we’re able to tackle. Planet B isn’t happening, but its hope is the policy equivalent of deus ex machina, letting us rationalize current direction under the premise of a nebulous future savior.


Twitter’s holding an auction for assorted office furnishings that closes tomorrow at 10 AM Pacific.

Mildly entertaining: many items are from Ohio Designs, a boutique San Francisco manufacturer, and the same one we used to buy from at old Heroku to add to our office’s raw “loft” look. Disgustingly overpriced, but some of the most solid furniture ever created, with heavy steel frames that I wouldn’t bet against surviving a nuclear blast. And by extension, also impossible to move or lift. Their HQ is next door to Southern Pacific Brewing Company on Treat. I know that because at one point I became obsessed with these things and went over to inquire about buying one for home, before settling on an alternative at CB2 with similar build quality at a quarter of the price.

A Twitter bird statue and Twitter neon light are sitting at $18k and $20k respectively – 10x their worth, and probably < 1x what Twitter paid for them. A far cry from $44B, but a tiny consolation prize, and notable for their part in the symbolism of selling off the worst excesses of the old era of tech.


In the off chance this helps someone else: I started building my own ImageMagick after the project changed its static binary to an AppImage, which doesn’t work very well.

As with all programs of that age, every feature is configurable, and it’s non-trivial to get it compiled with support for more unusual formats like HEIC. I thought I’d succeeded, but I’d been using a build that couldn’t read HEIC for months, and it’d only been working by virtue of me not uploading a HEIC in a while.

My ./configure invocation looked right:

./configure --with-heic=yes --with-webp=yes ...

But it’d been silently failing:

Delegate Library Configuration:
  ...
  Ghostscript lib   --with-gslib=no	 no
  Graphviz          --with-gvc=yes	 no
  HEIC              --with-heic=yes  no
  ...

Yes to HEIC, but actually … no.

There are GitHub issues around with the same problem. It can have a variety of causes, but it probably means that configure couldn’t load one of the dependencies for HEIC properly (libde265 and libheif).

ImageMagick’s build process silently swallows problems, so it’s necessary to go to configure.log to find out what happened. Digging into mine, I found that I was missing another dependency called dav1d, which turns out to be an AV1 decoder from the VideoLAN project.

configure:31871: $PKG_CONFIG --exists --print-errors "libheif >= 1.4.0"
Package dav1d was not found in the pkg-config search path.
Perhaps you should add the directory containing `dav1d.pc'
to the PKG_CONFIG_PATH environment variable

I fixed the problem with:

apt-get install libdav1d-dev


The FAA’s recent trouble with NOTAM caught my eye for a couple reasons:

  • It happened 12 hours after my flight out of Calgary. I hadn’t been too happy about our 4 hour delay, but it goes to show that things can always be worse.

  • The FAA traced the outage to a damaged database file. Mongo, is that you?

I’m still holding out hope for a full Silicon Valley-style outage postmortem, but from what we can glean, it sounds less exciting than you might hope. “Database” can mean a lot of things, and from the sound of it, this one is more like a flat file.

A finding from yesterday was that the file had been corrupted by manual manipulation by a pair of contractors. There were supposed to be safeguards to prevent that from happening, but they failed. We also found out that, in line with long tradition in operations, there had been backups, but when brought online, the backups were found to be corrupted too. Absolutely classic.


I watched Black Adam (2022). I don’t know why – mild curiosity? Masochism? Latent Dwayne Johnson fandom?

Let’s start with character vitals:

  • Strengths: Invulnerable, power of flight, super strength, super speed, teleportation, unassailable combat skills, shoots lightning bolts, literally a demigod.
  • Weaknesses: 5,000 years old, so occasionally makes endearing anachronistic comments that are lightly funny but not really.

It’s never clear what exactly the conflict is or why the viewer is supposed to care. As Black Adam wakes, a group of superheroes is sent to confront him, but it’s never in question that they’re on the same side, and they only fight it out to prove the narcissism of small differences. Later there’s a token bad guy, and as you might be able to imagine, it’s a huge mystery as to whether the combined might of Black Adam and his new friends at the Justice Society will be enough to take him down.

There was a long period of MCU golden age over the last ten years where I thought Hollywood had figured it out – the perfect formula of fan service, special effects, and quippy dialog, reliably reproducible to make a film that fans liked and that would land a solid ~8 on IMDB. Apparently not, or at least the DC studios never got the memo.

Don’t worry, this hasn’t become a movie blog. Tomorrow, back to your regularly scheduled programming.


A light movie recommendation: Triangle of Sadness (2022).

I didn’t know much of anything about it going in beyond “a couple, both young models, go on a cruise with wealthy elites”, which is exactly the right amount to know about it. The movie takes some major turns throughout, and it’s very enjoyable if you don’t know they’re coming at all (I didn’t, but have since read some synopses that would’ve spoiled them). It’s a bit artsy and a black comedy, so it won’t be everyone’s cup of tea, but it punches well above its IMDB 7.5 and distantly above its 63% RT (can anyone trust film critics anymore?).


This one’s a few years old, but a nice essay on Where to live, a question I’m struggling a lot with right now (and for the last five years). A stranger will never be able to answer something so important for you, but while searching for insight, cast your net wide.


Anyone remotely adjacent to the tech industry will know about the Sam Bankman-Fried saga by now: the collapse of FTX as the biggest Ponzi scheme of all time, subsequent world podcasting tour by Sam to broadcast his innocence, arrest in the Bahamas, extradition to the US, release on $250M bail, and return to his parents’ home in Stanford, back on Twitter and League of Legends until October.

Consuming pre-exposure material on Sam qualifies as its own genre of entertaining dark comedy. The broad themes are always the same – hosts laud his genius, ask no real questions, and make dozens of grand assertions with absolute confidence.

Highlights of Acquired’s FTX episode:

  • In the hosts’ own words, FTX’s 10x growth in valuation for 2021 was quote “unbelievable”. Literally correct.

  • Sam is not shy about congratulating his own brilliance, but the hosts do most of the work for him, unprompted. He’s compared to Jeff Bezos 3-4 times, and not in succession. The comparison is resurrected again and again to drive it home.

  • A hearty laugh is shared as Sam jokes “all you have to do to beat Mt. Gox is not lose everyone’s money”. So on the nose.

  • He was thinking about leaving his suit at his brother’s in DC because, except for wearing it to Congress, he didn’t know when he’d use it again. Good thing he didn’t, having since gotten good mileage out of it in the Bahamas and NYC.

The greatest moment is when Sam is bestowed the appellation “the zeitgeist artist of our time”, meaning that he understands today’s confluence of culture and technology better than anyone else on Earth.

See also Sam on Unusual Whales, post-exposure. He’s typically evasive, but the last 13 minutes (after Sam leaves) is frustrated editorial from panelists, and excellent.


This is simply excellent: Production Twitter on One Machine? 100Gbps NICs and NVMe are fast.

The article isn’t suggesting that Twitter actually do this, but explores how possible it’d be to run Twitter on one really big server. It shows its work, and contains dozens of Fermi estimates based on best available public data:

Now we can use this to compute some sizes for both historical storage and a hot set using fixed-size data structures in a cache:

tweet avg size = tweet content avg size + metadata size => 176 byte
tweet storage rate = avg tweet rate * tweet avg size in GB/day => 88 GB/day
tweet storage rate * 1 year in TB => 32.1413 TB
tweet content fixed size = 284 byte
tweet cache rate
    = (tweet fixed size + metadata size) * max sustained rate in GB/day 
    => 251.7647 GB/day
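
The arithmetic holds up. A tiny Go program reproducing the storage side of it – the 500M tweets/day average rate is my assumption, backed into from the article’s 88 GB/day figure:

```go
package main

import "fmt"

const (
	tweetAvgSizeBytes = 176   // content + metadata, per the article
	tweetsPerDay      = 500e6 // assumed average rate
)

// storageGBPerDay is the daily append rate for raw tweet storage.
func storageGBPerDay() float64 {
	return tweetsPerDay * tweetAvgSizeBytes / 1e9
}

// storageTBPerYear extrapolates the daily rate out to a year.
func storageTBPerYear() float64 {
	return storageGBPerDay() * 365 / 1000
}

func main() {
	fmt.Printf("%.0f GB/day, %.1f TB/year\n",
		storageGBPerDay(), storageTBPerYear())
	// 88 GB/day, 32.1 TB/year
}
```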

Doesn’t every programmer secretly love the idea of a mainframe? One giant machine that runs everything and has its own internal redundancy. Ensuring scalability through processes designed to run in parallel is obviously more practical and more robust nowadays, but if you were to try to run Twitter on one machine, you might be able to get results that aren’t too much worse, and with 1000x less infrastructure.

It wasn’t too long ago that we were really trying to do this. At iStock circa 2011, where the ops team was running the asylum, right around the end of my tenure we purchased a huge mainframe-esque box that advertised being able to stay online even if one of its CPUs failed. I was never told how much it cost, but undoubtedly the price tag would’ve made my eyes bleed. That was right around the period when misguided attempts to scale vertically and rack your own specialized hardware were already starting to look pretty silly, so I’m curious in retrospect how far it made it into production.


Published sequence 035.

This turned out to be a bit of a yak-shave, as I found out that despite a lot of work adding support to this site for various image formats over the years, it still wasn’t well set up for the .heics that come out of an iPhone. ImageMagick resizes them, but browser support sits comfortably at 0%, so it’s not a format you want to use on the web. I added some code to the build process to convert them to WebP (96% support), but that in turn required sizable refactoring to support outputs with an extension different from their input.

I should investigate using WebP as a general target regardless of the input format, since it seems like a better default than mucking around with JPGs and PNGs. But it’s not super pressing, because although WebPs produce considerably better compression than most JPG exports, they’re not too much better than a JPG that’s been optimized via MozJPEG, which this site does.
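
The conversion shim itself is small. A sketch of the sort of helper involved – the function names here are hypothetical, and the real build code passes more options to ImageMagick than this:

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// webpTarget swaps a source image's extension for .webp, since the output
// extension no longer matches the input's.
func webpTarget(src string) string {
	return strings.TrimSuffix(src, filepath.Ext(src)) + ".webp"
}

// convertCmd builds an ImageMagick invocation that resizes an image and
// converts it to WebP in a single pass (format is inferred from the
// output extension).
func convertCmd(src string, width int) *exec.Cmd {
	return exec.Command("convert", src,
		"-resize", fmt.Sprintf("%dx", width),
		webpTarget(src))
}

func main() {
	cmd := convertCmd("photographs/2023/_DSC1234.heic", 1200)
	fmt.Println(strings.Join(cmd.Args, " "))
}
```

The refactoring pain comes from webpTarget: every place in the build that assumed output path == input path now has to go through a mapping like this one.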


Published Content authored by ChatGPT front pages, wherein we see empirical proof that an article generated by ChatGPT is good enough to get all the way to the front page of Hacker News before anyone notices.


A guilty pleasure is reading other peoples’ multi-thousand word reviews on Leica cameras. From today: Fujifilm to Leica which goes into detail in rationalizing buying an M11.

I hate that I love these cameras. At Heroku we had a few vocal Leica users. One went as far as to sell all his equipment to simplify his life on a single Leica rangefinder and lens (an M9). But like the Leica brand itself, the move wasn’t about practicality or technical superiority, it was about romance. Leica doesn’t sell cameras, it sells lifestyles, and fans do 80% of the work for them. See also sh*t Leica photographers say.

Years later I’d pick up a Q, an amazing camera. The best that Leica’s ever produced. But once you’ve dipped your toes in, you’re forever haunted by the siren call of the M-series.

M cameras are a rip-off ($9k for an M11 body + $6k for a lens). Everybody knows they’re a rip-off. No honest person even tries to argue they’re not. But they’re well designed, and again, supremely romantic.

I check in every so often, and luckily, have held the line. Besides the price tag:

  • Rangefinders were always questionable, but EVFs made them obsolete. This might be debatable, except we have proof that Leica knows it too. The camera doesn’t come with an EVF, but Leica won’t hesitate to sell you one. For $750.

  • $15k. Not weather sealed.

  • Questionable software. Slow start up time.

On the plus side, the M11 did away with the removable bottom plate, a feature that’d been kept through the M10 for historical reasons despite being completely impractical in every way. You can’t beat full frame in a body that size, or the minimalist controls.

Leica must send $100M/yr Fuji’s way to keep cameras on APS-C and with at least six unnecessary dials each. It’s the only explanation for such magnanimity in staying out of the competition.


Marc Andreessen:

The most common event in our times: An expert with power/authority/influence is wrong in full view, yet suffers no consequences.


Contrary to popular belief, it doesn’t rain that much in San Francisco. A few dozen times a year it does. Very occasionally, it rains an unusual amount. Today it rained 0.83”, which is kind of a lot, I guess.

Local media and Twitterati (now newly christened armchair meteorologists), not ones to let a kinda-maybe-almost-but-not-really catastrophe go to waste, dramatically dubbed this THE BOMB CYCLONE. In a flashback to her greatest hits of 2020, London Breed heroically ordered residents to stay home.

By comparison, during an actual adverse weather event like hurricane Ian in Florida last year, the state saw rainfalls of 10 to 20” over a four-day period, ~3 to 6x daily what SF saw today. Tokyo’s rainiest month of October averages 9.24” over ~12 days of rain. In other words, the poor Japanese are dealing with twelve THE BOMB CYCLONES just in the month of October, every year.

See also “our new era is characterised by an unending self-reinforcing cyclone of hyperventilation journalism, the likes of which we’ve never seen before. […] we will careen from one crisis to the next.”


Gap puts their corporate HQ in San Francisco up for lease or sale, notable due to the size of the lease (162k sqft), but also because it’s historically one of the city’s preeminent companies, founded, headquartered, and proudly operating out of San Francisco.

Only months ago San Francisco’s pundit class decried “misinformation” that Gap had shuttered operations in SF, despite having clearly closed their flagship store on Powell (which remains an iconic SF empty storefront to this day) and corporate office for Old Navy. Gap, they said, is still very much in the city, you stupid conspiracy theorists. But as the facts become indisputable, the strategy shifts from attack to denial – none of this is happening. Downvote, misdirect, ignore.

Gap’s departure is the latest blow in a long series of insults to San Francisco’s bottom line. Office vacancy is at a 30-year high, large real estate holdings are requesting reassessments to reflect decreased value, and even other SF darlings continue to flee (Salesforce announced a 10% layoff round this morning along with further office space reduction).

The deficit, currently a mere $728M, grows.


I’m trying to read James Joyce’s masterpiece Ulysses, again, fast becoming an annual tradition in humiliation and failure. I’ve tackled my fair share of literature – Dostoevsky, Dickens, Kafka, Vonnegut, Hemingway – and for my money, Joyce and Ulysses are on a level of their own, with allusions, puns, and classical references so rich they’ll leave you reeling, and wordplay so clever that it’s a wonder a human could write it.

But I’ve never in my life come across a book that’s so thoroughly and so totally unreadable. The prose is heavier than a pallet of bricks. A few years ago I tried to cheat my way to success by getting it in audiobook form, only to be foiled by the narrator’s dramatic Dubliner accent, which while appropriate, made things even worse. I gave up after less than an hour.

Back to written form, and 10% through. I’ll make it, some day.


English language Stack Exchange bans ChatGPT-generated answers, taking a step beyond what the network as a whole has committed to.

For years the concerns from public intellectuals (few of whom are AI or computer specialists) over AGI have felt overblown to me, but ChatGPT’s got me officially worried. Not that AGI’s going to take over the world and enslave us all, but rather that we’re about to smash headlong into a creative wall in communication and writing as ChatGPT becomes the perfect remixer, able to endlessly recycle new permutations of the existing bulk of human output in a way that’s convincing enough for us to accept. Never able to produce anything interesting or new, but with few people who notice since we spend our days consuming human-regurgitated remixes anyway.

I’m expecting more announcements like Stack Exchange’s over the coming months. Discerning humans can probably distinguish ChatGPT output for their chosen field, for now, but it’s certain to cause problems in everyday life. For example: search engine spammers taking their game to a new level with AI-produced filler far more convincing than the Markov chain or plagiarism techniques they use today. Firms using it for frontline support to lead customers in endless rhetorical circles even more frustrating than the precanned answers and telephone mazes of today. Phishing seniors via email with previously unimaginable sophistication.

You can’t put this genie back in the bottle. If we don’t make it, China will. Regulation’s a tempting idea, but lawmakers, barely able to sound even partially articulate in hearings over comparatively simple issues like social media, are wholly unequipped to tackle such a complex subject. The only practical answer is supremely unsatisfying: wait, and see.


Bloomberg reports how Shopify is canceling recurring meetings with more than two people to kick off 2023.

Tobi’s taking a lot of flak over it from middle managers on HN (my favorite from the high drama category: “Tobi has done this a couple times […] it comes from a place of narcissism”). I think it’s a great idea. In the early 2010s at Heroku I’d started to think that at least as far as tech was concerned, meetings were a solved problem. It was an engineering-driven culture, and most engineers hated them, so we’d schedule as few as possible, with only a few key ones per week. But then I got to Stripe and realized that I’d been living in a bubble – an uncomfortable truth of the universe is that most people love meetings, even if they think they don’t, and doubly so for well-pedigreed Stanford types most likely to work at top Silicon Valley firms. Far from being an element of change, tech was only the latest continuation of a long tradition in big American business. Every satirical panel from 90s Dilbert cartoons held perfectly true.

Based on testimony from Shopify employees, Bloomberg gets a few points wrong:

  • Meeting cancellations aren’t permanent. The move wipes the slate clean, but important meetings can be put back on the docket.

  • It’s not the first time Shopify’s done it. One person says that these events were originally aimed to occur every rand(600) + 300 days.

Targeting recurring meetings specifically is the right move. These get slapped onto calendars, often for large groups, and their tendency is to stay after their inertia’s built. Before long no one can even remember a time without them, and it’d be heresy to challenge their existence. Ending recurring meetings by default shifts the burden of proof to the right place – it’s now on middle management to re-justify their existence rather than falling to a maverick engineer who has to burn social capital in a risky attempt to argue for their discontinuation.


Occasionally, especially when thinking about the past around the holiday season, I’ll use the Wayback Machine to take a look at sites which I used to consider the absolute pinnacle of great web design back in the day. Today’s specimen is A List Apart, a site that writes about web design. Screenshot of a 2009 capture below.

Some things that come to mind:

  • We all loved small fonts back then. The body text here is ~8pt Verdana which is a style I used all over the place back in the 2000s. I’d hazard a guess that the reason a lot of us liked them is that larger fonts didn’t look anywhere near as good as they do now, before the advent of subpixel anti-aliasing and retina displays.

  • A key element that makes the web today look as good as it does is fonts, which render so beautifully nowadays that the web’s default look is quite good, even where minimal design has been applied. Old sites should benefit from these advancements, but it never seems to translate through. Their fonts don’t look great, and it makes the whole site not look great.

  • The broad strokes of this old layout are excellent, and better than alistapart.com’s current site. The specifics are a bit dated and it needs niceties like responsiveness, but a little modernization would easily bring it up to date. The current era of web design is too quick to remove any and every extraneous element, and although the baseline product is decent, 2022’s internet looks uniform and somewhat bland.

  • Notice all the text in images? (e.g. in the logo, “An event apart”, “A book apart”, etc.) The reason people did that is that it looked better than what the web browser could render at the time. Programs like Photoshop would yield additional anti-aliasing beyond the browser’s discrete, blocky pixels. It’s ironic that transported through time, text that was put in images to look better now looks much worse.


When all you have is a hammer (access to free Heroku apps), everything looks like a nail (dynamic apps + buildpacks for everything). On the eve of November 28’s Herokupocalypse, I pulled the static rip cord and dropped the dynamic component from apps that should’ve been static HTML pages from the get go. The totality of tooling required: Chrome and a text editor.

See also Nanoglyph 022: Time and Entropy.


Happy new year!

I always love these first few days of a new year. A fresh slate, after a week off, batteries recharged, and optimism for the year ahead. I’ve spent the last few days doing little else beyond writing, some casual coding, and running. It’s been great. Despite some of the most uncertain times in which I’ve ever lived, I’m feeling a lot better about 2023 than 2022.


Wrote a few notes on publishing iOS live photos, involving exporting a .mov from iOS and running it through ffmpeg to fix aspect ratio, scale, strip audio, and convert to web-friendly codecs. Taking live photos is a nice alternative to shooting in video mode because you can have them captured by default, so there’s a lot less mucking around with start/stop.

A major takeaway: you can make a <video> behave like a classic animated GIF by having it autoplay and loop, but only by also including the muted option, which browsers require before allowing autoplay to prevent us all from losing our minds.

<video autoplay loop muted playsinline>
    <source src="/videos/girm632/cougar-1.mp4" type="video/mp4">
    <source src="/videos/girm632/cougar-1.webm" type="video/webm">
</video>

playsinline is another magic Apple token that tells an iPhone that it’s okay to autoplay a video.


Eugyppius has a way with words:

The pre-pandemic world is gone forever. If the past year has taught us nothing else, it has taught us that much. Mass containment has permanently transformed our societies and our cultures. It has cemented the cooperative relationship between the regime and the press, and it has changed the content and the tenor of our media. Drama and panic have always sold newspapers, but our new era is characterised by an unending self-reinforcing cyclone of hyperventilation journalism, the likes of which we’ve never seen before. For the foreseeable future, I think, we will careen from one crisis to the next.

Truer words have rarely been spoken. The whole edition is well worth reading.


The Calgary Zoo yesterday, somewhere I haven’t been in years. The grounds feel smaller now, but the animals are impressive, and we got lucky with many out in the open. Amongst others:

  • Extraordinary penguin exhibit – lots of them, very active, and multiple locations indoors and out. (I wish I’d gotten some photos, but was holding coffee at the time.)
  • Cougars jumping, climbing, and very lively. (The polar opposite of the snow leopards.)
  • A gorilla couple with a young baby that attached itself to the mother’s arm as she roamed.
  • Two Komodo dragons, with one hanging out right in the open a few feet from the glass.
  • A pack of wolves circling energetically within inches of the observation perch.
  • A troupe of Japanese macaques, which I haven’t seen since visiting Japan. (Too bad there wasn’t an onsen.)
  • An enormous Malayan tapir.
  • The biggest Bactrian (two hump) camel any of us had ever seen. What a beast.

Although not as cold as last week (-30C), it was chilly out (-3C), and I’ve never been so thankful for the butterfly conservatory’s hot, humid interior (currently minus the butterflies for the winter season) for recharging body temperature.

I played around with exporting some iOS live photos and running them through ffmpeg to make some VP9/.webm shorts. It worked reasonably well, despite the idiosyncrasies of the <video> tag.


Follow up from yesterday’s note on atom reslugging: Short, friendly base32 slugs from timestamps, discussing how unique slugs are generated for each atom on this page.

A published timestamp is converted to bytes, then encoded to base32 similar to RFC 4648, except with digits leading so that output slugs sort in the same order as the input timestamps. Implemented in ~4 lines of Go:

var lexicographicBase32 = "234567abcdefghijklmnopqrstuvwxyz"

var lexicographicBase32Encoding = base32.
    NewEncoding(lexicographicBase32).
    WithPadding(base32.NoPadding)

func atomSlug(publishedAt time.Time) string {
    i := big.NewInt(publishedAt.Unix())
    return lexicographicBase32Encoding.EncodeToString(i.Bytes())
}

In the unlikely chance that you happen to be following this in an RSS reader, you just got a dump of every atom written so far all over again. Sorry about that. It happened because I “reslugged” (assigned a new URL-friendly identifier) them all, making your reader think each was new content. I normally wouldn’t make a breaking change like that, but there was an improvement opportunity I couldn’t resist, and the project is only a few days old, so I went for it. It’ll be the last time something like that happens.

Each atom’s slug is its published timestamp converted to bytes and base32-encoded (e.g. this atom’s is giqmrd2). I changed the encoding character set from the normal abcdefghijklmnopqrstuvwxyz234567 to my own numbers-first variant of 234567abcdefghijklmnopqrstuvwxyz. The reason I did that is so base32-encoded slugs always sort lexicographically the same as their source timestamps, which is quite a convenient property. More on this in a short piece tomorrow.


I updated my now page for the first time in almost three years. It may have been an awful couple years, but at least a great week for this site?


Write as often as possible, not with the idea at once of getting into print, but as if you were learning an instrument.

J.B. Priestley

My prose output in 2022 was way down (not counting commit messages and Reddit comments), and I feel this viscerally. The less you do it, the harder it gets, leading to a vicious negative feedback loop. Come 2023, I want to invert that cycle. Write more and more often, and gods willing, make it better, and easier.


Published fragment: Easy, alternative soft deletion: deleted_record_insert.

We’ve switched away from traditional soft deletion, where a deleted_at column is wound into every query as deleted_at IS NULL, to a schemaless deleted_record table that’s still useful for debugging, but doesn’t need to be threaded throughout production code, and razes a whole class of bugs involving forgotten deleted_at IS NULL predicates or foreign key problems (see Soft deletion probably isn’t worth it).

I recommend the use of a generic insertion function:

CREATE FUNCTION deleted_record_insert() RETURNS trigger
    LANGUAGE plpgsql
AS $$
    BEGIN
        EXECUTE 'INSERT INTO deleted_record
          (data, object_id, table_name)
        VALUES
          ($1, $2, $3)'
        USING to_jsonb(OLD.*), OLD.id, TG_TABLE_NAME;

        RETURN OLD;
    END;
$$;

Which can then be used as an AFTER DELETE trigger on any table:

CREATE TRIGGER deleted_record_insert AFTER DELETE ON invoice
    FOR EACH ROW EXECUTE FUNCTION deleted_record_insert();

Our results have been shockingly good. No bugs to speak of, no reduction in operational visibility, and less friction in writing new code and analytics.
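As a sketch of what that operational visibility looks like: because each deleted row is preserved whole as jsonb, debugging a deletion is a plain query against deleted_record. This assumes an autoincrementing id column on deleted_record, which isn’t shown in the trigger above:

```sql
-- Inspect the ten most recently deleted invoice rows. The data column
-- holds the full original row as jsonb, so nothing is lost.
SELECT data
FROM deleted_record
WHERE table_name = 'invoice'
ORDER BY id DESC
LIMIT 10;
```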


I wrote some meta-commentary about this Atoms list in a fragment: The Unpursuit of clout.

The broad thesis is that publishing here is a little harder than publishing on Twitter (it involves a macro to insert an entry in a TOML file and doing a git push), but over the last few days which are this project’s lifespan to date, I’ve been finding it easier. With no favs, likes, subtweets, or comments, you’re not performing for anybody, and not gaming any metrics. Just write, publish.


As is traditional, Ruby 3.2 was released on Christmas day. The big news is that Shopify’s YJIT (Yet Another Ruby JIT) engine is out of experimental status and now considered production ready, having been battle-tested at Shopify for a year. As CEO Tobi notes:

Very good chance that YJIT is now running more ruby net code than any other VM. Shopify storefronts are a sizeable percentage of all web traffic!

Benchmarks show a performance improvement of ~40% over CRuby, so it’s exciting for all Ruby users, including us. I can’t imagine not looking into switching to YJIT in the new year.

Alongside Sorbet, Stripe was also working on a JIT, but with YJIT mainline, I think it’s safe to say that theirs stays Stripeware (if it’s even still being developed). Shopify’s demonstrated a healthier model for working with open-source projects – by maintaining close connection with the core teams (including Rails as well), their work goes upstream, and comes under the umbrella of collective maintenance. Stripe’s projects are in danger of ending up more like Facebook’s Hack, diverging far enough from the trunk to become a separate ecosystem.

Also nice to see is that YJIT is written in Rust, after a successful port in April. This is in opposition to Stripe’s commitment to a C++ toolchain, and likely keeps the project more maintainable and more sustainable (and hopefully extends these properties to Ruby itself as it makes inroads into core, which is heavy C).


I wrote a mini-review for The Way of Water, the latest in the Avatar franchise. Go for the cutting edge CG of blue aliens and Pandora whales, stay for the 22nd century Spruce Goose-esque flying boat, crab mechs, and v2.0 exoskeletons.


The latest Twitter files by way of David Zweig, on the suppression of Covid information (and actually, read this long form version instead).

A few days ago Elon Musk appeared on All-in, and put it best: “all the conspiracy theories about Twitter turned out to be true”. It was apparent to anyone paying attention that Twitter was censoring true statements inconvenient to the story being told by Fauci and the White House, despite vehement denial from all parties involved. As with previous iterations, the Twitter files aren’t about knowing a murder had taken place, but rather about finding the smoking gun with fingerprints intact.


A thought-provoking interview with John Mearsheimer, a decorated political scientist, and rare dissenting voice on the war in Ukraine. He believes there’s an imminent danger of escalation, and makes the case for suing for diplomatic resolution.


In San Francisco, properties assessed at a total of $59 billion have requested reassessment in 2022, having correctly recognized that owning downtown isn’t as valuable of a proposition as it was in 2020. Adjustment would bring the total down to $26 billion, translating to $308 million in lost property tax revenue should all their reassessments succeed. Add that to the already identified $728 million hole in the budget over the next two years.

Claimed by the article: San Francisco is slow to return to work due to a high concentration of “tech and professional services”.

Omitted from the article: after a decade spent raising taxes and slathering on red tape, San Francisco locked down earliest in the nation, and for the longest, keeping major restrictions in place until just a few months ago. Add decades of obstructionism that contributed to some of the priciest rents and real estate on the planet, and in a major surprise, young workers didn’t find it a compelling place to hunker down for three years of life in stasis, and businesses didn’t find it compelling to sit on three years’ worth of empty offices.

The good news: San Franciscans achieved their final victory in driving out The Bad People. The bad: rents are as high as ever, the budget is a smoking crater in the ground, and the 70s didn’t come back.


The other day I found that my automated job to cross-post to a toy Spring ‘83 board started failing, with the reason being that my last sequences entry was more than 22 days old. Not good – I’d intended to keep them more up-to-date than that – but I don’t always have anything new to post.

I’ve always liked the idea of .plan files, tiny plaintext files that live in a user’s home directory and which would be dumped by running the finger command against a target user. Famously, John Carmack published them for more than a decade. Here, in a similar spirit to .plan, I’m introducing “atoms”, tiny multimedia particles that are fast and easy to write. Along with sequence entries, they’re cross-posted to Spring ‘83, and will also live at /atoms.


Xmas morning. A signature gift was a new Zojirushi rice cooker (even if you don’t recognize the name, their elephant logo is conspicuous worldwide), so I’ll be graduating from cooking rice in my old steamer.

Zojirushi reminds me of the old Heroku office at 321 11th St, where we had a number of their vacuum boilers. They’d keep water at perfect coffee-brewing temperature, ready to make a Chemex pourover at any moment.