Multimedia particles in the style of a tweet, also serving as a changelog to consolidate changes elsewhere in the site. Cross-posted to an atom feed. Frequently off topic.
This is an archive page that shows every atom ever published, and which will grow to monstrous size. See also Atoms which shows only recent posts.
Published sequence 092, Tori Bar.
This one was new for me: Tori Bar in Calgary’s Inglewood, a tiny place serving Japanese yakitori off a single grill, the skewers cooked right in front of your eyes. We ended up ordering roughly 3⁄4 of the menu; I can heartily recommend the tako wasabi (octopus) starter, the skin-on-thigh yakitori, the pork belly, and the shishamo (smelt, a tiny fish served well done).
Published sequence 091, Fair’s Fair.
I dropped by Fair’s Fair in Inglewood (Calgary) yesterday. It’s a large, rustic bookstore with bare-bones furnishings, some real vintage items (in the sense of old rather than expensive), and a big sci-fi/fantasy section. I’ve been coming to this place since I was a kid (when used book prices were measured in cents rather than dollars), and was glad to find that it’s the same Fair’s Fair in spirit as it was all the way back then.
Published fragment ERROR: invalid byte sequence for encoding UTF8: 0x00 (and what to do about it), on handling a common programming language/database asymmetry around tolerance of zero bytes.
Truth is treason in an empire of lies.
— Ron Paul
Like a lot of amateur photographers, I’ve had a fascination with fast lenses for quite some time. I only learned recently, though, that camera companies were making very fast lenses as early as the 60s. See the Canon 50mm ƒ/0.95 “Dream Lens” for example.
While lenses in this vein don’t compare favorably with modern optics on any objective dimension like sharpness, distortion, vignetting, etc., they produce some really pretty bokeh/out-of-focus effects. These days, more artistic statement than pragmatic utility.
So far I’ve managed to restrain myself and have never bought an L-mount camera, but I’m tempted every time I see this sort of thing.
See: Minimalissimo: 2009 → 2024.
This is one of the first websites I ever added to my RSS reader, and I must’ve done so right around 2009. It was always one of the good netizens: no ads, popovers, or gimmicks, publishing full posts to RSS, and high quality, on topic content.
In a separate post on his blog (a gorgeous website by the way), Minimalissimo’s curator Carl Barenbrug talks about the site’s ascendant trajectory:
Between 2011 and 2015, minimalissimo.com was undoubtedly the most read and respected minimal design blog on the web. Sure, there were other massive publications that dwarfed our little site, but within our niche, no site offered the same level of consistent quality in curation. We were easily hitting over 100k unique visits to the site per month. At this point, we had accumulated over 6 years of posts spanning a wide variety of art, architecture, and design, so it felt like a natural step to evolve Minimalissimo into both a printed and digital publication. Particularly as print was thriving at the time. This then led to a trio of self-published magazines that each sold incredibly well. I’m still massively proud of those volumes and you can still get your hands on the digital versions today.
And in later years, challenges with an internet flooded with cheap content and growing ever more centralized:
By 2019, the volume of design blogs and magazines on the web was huge. Many began to look the same and we began to notice so much recycled content. Curation was becoming increasingly challenging if we wanted to maintain distinctiveness. On top of this, social media was very quickly eating away at indie websites like a plague. Our readership was declining year after year, and the pressures of pumping more energy into social media platforms to be noticed and relevant was a huge time sink. And it stunk. Much like the algorithms we’d all have to navigate in the years that followed. We were fighting a losing battle.
Published fragment the parallel test bundle, a Go convention that we’ve found effective for making subtests parallel-safe, keeping them DRY, and keeping code readable.
type testBundle struct {
account *dbsqlc.Account
svc *playgroundTutorialService
team *dbsqlc.Team
tx db.Tx
}
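To give a rough idea of how the bundle gets used, here’s a hedged sketch of a setup function in this style (helper names like testTx and the fixture calls are stand-ins, not from the fragment): every subtest calls setup, gets its own transaction-scoped bundle, and can safely run in parallel.

func setup(ctx context.Context, t *testing.T) (*playgroundTutorialService, *testBundle) {
    t.Helper()

    // testTx is a stand-in for a helper that starts a test transaction and
    // registers a rollback in t.Cleanup, so parallel subtests never share
    // database state.
    tx := testTx(ctx, t)

    // Stand-in fixture helpers that insert rows inside the test transaction.
    account := createTestAccount(ctx, t, tx)
    team := createTestTeam(ctx, t, tx, account)

    svc := &playgroundTutorialService{} // real construction elided

    return svc, &testBundle{
        account: account,
        svc:     svc,
        team:    team,
        tx:      tx,
    }
}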
Hetzner is launching a new Object Storage product on November 1st, aimed squarely at competing with S3 (and is S3 API compatible). Their pricing page is quite verbose, so here’s a summary per my interpretation:
| | Storage | Transfer out | Operations |
|---|---|---|---|
| AWS | | | |
| Hetzner | | | |
Observations:

Hetzner’s offering is only available in its eu-central region, so US-based customers will have to deal with speed-of-light latency.

On the face of it, pretty exciting. An optimistic premise of major cloud providers like AWS is that as they reduced their own operating costs through economies of scale, some of those savings would be passed down to us, but we’ve seen over the years that this rarely happens, even as the cost of storage hardware has steadily decreased.
I have to think that Hetzner’s prices are closer to what S3 would cost, were it to be launched today.
I’m demoing a new version manager. I’ve been on asdf for a few years. I often use IRB (Ruby’s interactive interpreter) for basic calculations since Ruby makes such a good scripting language. When doing so today, for about the 1,000th time, I got this:
$ irb
No version is set for command irb
Consider adding one of the following versions in your config file at
ruby 3.2.1
This sort of lazy error bothers me.
I was told the other week that Mise does a better job of default behavior and descriptive error messages, so I’m giving it a shot.
So far, so good. Easy installation and configuration in my ~/.zshrc. I asked it to install Ruby, and it just did it. No additional plugin needed to be installed, and no specific version was required:
$ mise install ruby
mise ruby@3.3.5 ✓ installed
When entering a directory with .tool-versions, it told me immediately that a tool was missing:
$ cd owl
mise missing: ruby@3.3.4
A simple mise install walked me through installing all dependencies, including some unusual ones like Crystal, and doing so interactively so I could skip any that I didn’t want:
$ mise install
mise ⚠️ crystal is a community-developed plugin – https://github.com/asdf-community/asdf-crystal
Would you like to install crystal? Yes
mise ⚠️ postgres is a community-developed plugin – https://github.com/smashedtoatoms/asdf-postgres
Would you like to install postgres? No
mise ruby@3.3.4 ✓ installed
mise crystal@1.6.2 ✓ installed
mise run with --yes to install plugin automatically
mise asdf plugin postgres is not installed
mise Run with --verbose or MISE_VERBOSE=1 for more information
Published fragment Rails World 2024, with a few reflections on this year’s event.
Published sequence 088, royal lion hunt.
I visited the British Museum in London during my stay there last year. The museum has a wealth of ancient artifacts, including some of the most famous ones in history like the Rosetta Stone, but despite having my camera with me, I took few photos while I was there. All I could think of was the tens of thousands of times each of these objects was photographed every day, contributing to an enormous body of billions of photos, 99.9999% of which would never be glanced at again.
This is one of the few artifacts I photographed because I liked it so much. It’s artwork on stone depicting Assyrian royals taking part in a lion hunt, circa 645 BC, right around the period when the civilization would collapse.
At the time I knew almost nothing about Assyria, but a friend sent over the excellent episode “Empire of Iron” from Paul Cooper’s Fall of Civilizations podcast (also on YouTube). It starts by describing how the Greek general Xenophon came across the ruins of two colossal cities as he was returning from a battle in 401 BC. We know now that these were the Assyrian cities Kalhu and Nineveh, but by then (about 200 years post-collapse) locals knew nothing about them, despite their far greater scope and sophistication than anything that could be built at the time. It would’ve been like living amongst ancient ruins built by giants.
Published sequence 087, Transamerica.
Published fragment TIL: Variables in custom VSCode snippets, on using built-in variables in VSCode snippets to make publishing to this site incrementally faster.
Of mild interest, Stripe has announced a new API release process. Two named API versions a year will be released, named after plants (e.g. “acacia”), and presumably following an A-Z scheme similar to Ubuntu naming.
Previously, API changes roughly followed this procedure:
By my reading, the new scheme seems to be largely the same, except that non-breaking changes would be held for a monthly release on the current version, and breaking changes would be held for up to six months.
I suppose the benefit of the new approach is that it gives users a more predictable cadence for breaking changes. Optimistically, maybe it gets them in the habit of updating their API version twice a year. Even more optimistically, maybe it starts to pave the path for a formal deprecation lifecycle so that ancient API versions could eventually be retired.
Published fragment A few secure, random bytes without pgcrypto, on avoiding the pgcrypto extension and its OpenSSL dependency by generating cryptographically secure randomness through gen_random_uuid().
Published Real World Performance Gains With Postgres 17 B-tree Bulk Scans, in which we benchmark one of our API endpoints and get a 30% throughput improvement, with a 20% drop in response time.
As long as you make heavy use of eager loading (which every serious application does to remove N+1s), Postgres 17 looks to be one of these releases where all you have to do is upgrade, and reap a major performance gain for free.
Published sequence 086, County Highway.
Published fragment Direnv’s source_env, and how to manage project configuration, on how I accidentally stumbled across the source_env directive and dramatically improved my configuration methodology overnight.
I pushed a new version of redis-cell today, a project that I still somewhat maintain, but only touch once a year or so.
While looking into another issue that someone had filed, I got the bright idea to update the project’s dependencies. That was a mistake, and I ended up sinking hours into fixing calls to the time crate. It wasn’t just that a few breaking changes had been introduced – no, the entire API had changed, and every use of any function or type from the crate had to be fixed. There was no upgrade guide.
I really want to like Rust, but something like this happens every time I go back to the language. This wasn’t some novel third party dependency that broke. It was time, one of the most core facilities of any programming language, and although the changes that broke me are older now, a cursory look at the project’s changelog shows that it’s regularly deprecating/changing API on recent versions.
Zero cost abstractions are cool, but you know what I like better? Stability.
After coming off the absolute blight on human consciousness that was The Acolyte, I found myself wanting to go back and watch the original Star Wars trilogy.
I was a teen when its “Special Edition” revisions were released, and I remembered that George Lucas had gone on record at the time saying that these were now the definitive versions of the movies. But that was decades ago, and I’d just assumed that the smallest modicum of rationality had won out since then, and HD versions of the theatrical releases had gone out. I mean – the menagerie of Jar Jar-esque CGI critters on Tatooine and Han walking over Jabba’s tail – it’s all so clownish that no one could possibly have stuck to that line. Right?
Wrong. I watched a few minutes of the latest Blu-ray release and it was painful. It’s all in there. Even in the 90s the CGI looked awful. Now, it’s a punchline.
Scrounging the web, I came across Project 4K77 (‘77 is when A New Hope came out), which also hosts Project 4K80 and 4K83 for Empire and Jedi, where fans have scanned 35 mm film frame by frame to 4K resolution, and painstakingly cleaned up the whole collection to approach modern standards.
I watched a copy, and it was exactly what I was looking for. Not only is all the Special Edition garbage gone, but it looks considerably better than Lucasfilm’s Blu-ray restoration. It’s grainy, but left that way on purpose to stay true to the original theatrical release.
I’m at the point now where I’m pretending no Star Wars past the original trilogy exists. Who could possibly have guessed not only how badly the prequels would turn out, but that the sequel trilogy would be even worse, and the TV follow-ups down in the gutter with them?
Oh, and mercifully, Han shoots first.
Published fragment Elden Ring, on how I broke my promise never to give FromSoftware money again, and it was okay.
Golang Weekly notes that Go has jumped to the 7th position on the TIOBE index, which measures programming language popularity.
The rankings are still hard to believe (does anyone actually believe there’s more C++ development happening than JS/TS?), but even so, a positive sign!
I’ve updated The Two-phase Data Load and Render Pattern in Go after Roman pointed out that if we swap the position of two generic parameters in Render, another generic parameter can be inferred, and every invocation gets considerably cleaner.

Previously, Render looked like this:
func Render[TLoadBundle any, TRenderable Renderable[TLoadBundle, TModel, TRenderable], TModel any](
ctx context.Context, e db.Executor, baseParams *pbaseparam.BaseParams, model TModel,
) (TRenderable, error)
And was invoked like:
resource, err := apiresource.Render[*apiresourcekind.ProductLoadBundle, *apiresourcekind.Product](
ctx, tx, svc.BaseParams, product
)
In the updated version, the positions of the first two generic parameters are swapped:
func Render[TRenderable Renderable[TLoadBundle, TModel, TRenderable], TLoadBundle any, TModel any](
ctx context.Context, e db.Executor, baseParams *pbaseparam.BaseParams, model TModel,
) (TRenderable, error) {
And the function can now be invoked like this:
resource, err := apiresource.Render[*apiresourcekind.Product](
ctx, tx, svc.BaseParams, product
)
Much cleaner. A caller no longer even needs to know that the load bundle exists. At work I applied the fix to our hundreds of lines of existing calls, and the difference in readability is night and day.
River Python has shipped (with a huge assist from Eric Hauser, who contributed all the original code), enabling insertion of jobs in Python that will be worked in Go. It supports all the normal insert features including unique jobs and batch insertion, along with Python-specific stretch goals like exported type signatures, async I/O, and a @dataclass-friendly JobArgs protocol.
Here’s roughly what it looks like in action:
@dataclass
class SortArgs:
strings: list[str]
kind: str = "sort"
def to_json(self) -> str:
return json.dumps({"strings": self.strings})
engine = sqlalchemy.create_engine("postgresql://...")
client = riverqueue.Client(riversqlalchemy.Driver(engine))
insert_res = client.insert(
SortArgs(strings=["whale", "tiger", "bear"]),
)
I’ve been playing around with SQLite the last couple of days. I thought I knew a little about SQLite, but I didn’t, and am getting my remedial education through an accelerated gauntlet. Some of what I’ve learned of its quirks has left me reeling.
Top surprises:
ALTER COLUMN is not supported. Official recommendation for changing a column? Make a new table.

DROP CONSTRAINT is not supported. Official recommendation for removing a constraint? Make a new table.

SQLite doesn’t have data types on columns. Data types (and there are only five) are on values only, so anything can go anywhere.

If you ask for a column with an unsupported/non-existent type, it happily does the wrong thing without warning or error. I was raising a schema like CREATE TABLE my_table (id bigserial, messages jsonb[]), which seemed to be working, so I mistakenly thought for the first day that SQLite supported serials and arrays. It does not.
You can use CREATE TABLE my_table (...) STRICT to only allow one of the five supported types: integer, real, text, blob, any.
There was a lot of recent fanfare about SQLite’s new support for jsonb. Unlike in Postgres, jsonb is not actually a data type, but rather a format that’s input and output to and from built-in jsonb* functions. When persisted, it’s one of the big five: blob.
Other fairly critical types are also missing, e.g. there’s nothing like timestamptz. If you want a date/time, you store it as a Unix timestamp integer or an ISO8601-formatted string, and a number of built-in functions are provided to work with those.
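A quick sketch of the loose typing in action, assuming Go with the mattn/go-sqlite3 driver (any SQLite driver behaves the same way): a made-up column type and a mistyped value are both accepted silently, while a STRICT table rejects the bad insert.

package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/mattn/go-sqlite3" // assumption: the mattn driver
)

func main() {
    db, err := sql.Open("sqlite3", ":memory:")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Non-strict table: "bigserial" isn't a real SQLite type, but the
    // declaration is accepted anyway.
    if _, err := db.Exec(`CREATE TABLE loose (id bigserial, n integer)`); err != nil {
        log.Fatal(err)
    }

    // A string goes into the integer column without complaint.
    if _, err := db.Exec(`INSERT INTO loose (id, n) VALUES (1, 'not a number')`); err != nil {
        log.Fatal(err)
    }

    // STRICT restricts columns to the five supported types and type checks inserts.
    if _, err := db.Exec(`CREATE TABLE strict_table (n integer) STRICT`); err != nil {
        log.Fatal(err)
    }

    _, err = db.Exec(`INSERT INTO strict_table (n) VALUES ('not a number')`)
    fmt.Println(err) // e.g. cannot store TEXT value in INTEGER column strict_table.n
}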
SQLite has some impressive features around streaming which I’m looking forward to playing with, but the initial DX experience has certainly been a little jarring.
On off days, I sometimes wonder if I’m bought into some narratives too strongly. Like, is Postgres really the world’s best database? Experiences like this certainly cement my conviction. Yes, it is.
Updated now, on RBAC and Python.
From Mike Solana, We Are the Media Now, on his publication, Pirate Wires, and on a new media landscape where journalists do journalism, and don’t hate you:
Why are you sharing scoops with journalists who hate you? Do you not understand how this works? I’m realizing it’s possible you don’t understand how this works. New information is the lifeblood of a media company. When you share it with a hostile outlet, you are feeding them. When you withhold it from a value-aligned company, if even inadvertently, you are starving them.
Send us your scoops. Send us your stories. Are you enjoying tech’s vibe shift? It didn’t just happen. Support the team that doesn’t want you liquidated, or don’t complain about the hateful tech press when they’re at your door with pitchforks. Will I still defend you when you’re unjustly maligned?
Linked as supporting evidence is the latest hit piece on Alexander Wang for the unforgivable sin of daring to suggest that meritocracy is a good thing. This remedial, second-grade-level writing was published in TechCrunch, by a person who mistakenly self-identifies as a “journalist”:
I would invite him — and those supporting them — to fuck all the way off. You misunderstand me. You thought I wanted you to fuck only partially the way off. Please, read my lips. I was perfectly clear: Off you fuck. All the way. Remove head from ignorant ass, then fuck all the way off.
Yes, that’s really the quality of discourse coming out of legacy tech media. I’m with Mike—don’t give anything to the legacy media. The faster they crash to zero relevance (towards which they’re on a collision course already), the faster they cease to exist.
I ran a query today that deleted two rows and clocked in at just under two hours. A new personal record for time per row deleted – one hour per row.
=> DELETE FROM metric WHERE name = 'networkin' OR name = 'networkout';
DELETE 2
Time: 6957592.469 ms (01:55:57.592)
metric is a tiny table itself, but it’s referenced by the large partitioned tables metric_point and metric_point_aggregate, both of which need to be scanned in their entirety to verify that no rows reference either of the two being removed. Luckily, the operation didn’t need to hold a significant lock, and inserts and reads on all tables were healthy throughout.
River now has a self-hosted web UI to help manage jobs and queues without having to drop down to a psql shell.
Paradoxically, it’s mostly written in TypeScript instead of Go, which is more of a testament to the state of Go’s templating system than anything else. It’s still distributed as a single binary thanks to go:embed.
Published fragment Sqlc 2024: check in, on some quick thoughts on whether sqlc is still the direction for Go projects now that we’ve been using it for three years.
I was amused to discover that Derek Sivers’ Nownownow.com also publishes its directory as a tab-delimited .txt file. I’ve had pretty good luck randomly finding interesting blogs to add to my reader from Now pages, and the plain text aspect makes it especially easy to search by city or country.
Published fragment Go: Don’t name packages common nouns, on avoiding naming Go packages after common nouns like rate or server so that they don’t clash with variable names, and how to find a more fitting name for them instead.
Published sequence 085, BER.
Published sequence 084, spherical aberration.
Published Eradicating N+1s: The Two-phase Data Load and Render Pattern, on using a two-phase data load and render pattern to prevent N+1 queries in a generalized way. Especially useful in Go, but applicable in any language.
Published fragment The romance of Europe, on a concert in Berlin, correcting for tourist bias, and how smartphones own the planet.
Published sequence 082, north of Warschauer.
Published sequence 081, Zschochersche.
A good post from Observable analyzing their HTTP request latency and producing nice visualizations for it. Doing non-realtime analysis frees up one of the axes (normally X is time), which lets the data be plotted in more creative ways, like histograms that show the entire “shape” of the distribution of request latency rather than an approximation of it using common aggregates like P50/P95/P99.
This particular article isn’t instructional on how to repeat this for your own service, but it’s a good idea. I’m going to try and render something like it for our API at some point – we have all the data we need via canonical log lines, so it’s just a matter of wiring up an adapter between data and frontend.
Published fragment ICQ, on the end of the universe, coming June 26th.
Published sequence 080, renewal.
Published fragment Notes on dark mode, which is not a dark mode tutorial, but collects a few notes on some specific refinements of a good dark mode implementation like tri-state instead of bi-state toggle, avoiding page flicker, and responding to theme changes from other tabs or the OS.
I rebuilt this site’s index page so it’s on the newer template system and can take advantage of dark mode. It’s not amazing, but I didn’t want to agonize over it for too long since likely few people will ever go there, so I just threw something together and published it.
From Notes on Japan, I found this last point quite funny:
visiting Japan feels like visiting the 2000s
CD shops everywhere
malls are thriving
people use fat laptops
You’d have to pry my MacBook from my cold, dead hands, but I dearly miss the greater variety of hardware and form factors that we used to see twenty years ago. Like, I acknowledge that something like the Sony VAIO P (depicted below) probably has ergonomics on par with writing War and Peace via T9 text entry, but I still wish I’d owned one.
Japan seems to be one of the last places left where a market for weird consumer electronics still exists. A few weeks ago on BART, I admired a Japanese guy’s incredible e-reader, fully customized for Japan’s vertical writing. Another guy on the 37 Corbett uses what must’ve been the only netbook left in San Francisco.
If you’re at a typical WeWork nowadays, there’s no easy way to tell that you didn’t accidentally walk into an Apple Store instead. The technology is beautiful – perfect geometry, flawless glass, and polished aluminum – but sterile. I’m glad there’s somewhere out there where weird, novel devices are still going strong.
I implemented dark mode for this website, which you should be able to enable using the toggle in the upper right. I figure that if even Google search can do it given what’s sure to be millions of lines worth of legacy code, then I should be able to as well.
I’ll write more about this soon, but by far the hardest part about dark mode is restyling. A site like this one not only has accumulated thousands of pages over the years, but also a dozen major templates of varying quality, each of which needs attention to support such a colossal change. I’ve slowly been moving over to Tailwind since last year and as I did, it gave me time to do some spring cleaning on the template system. Without Tailwind and that cleanup, adding dark mode would’ve been such a big job that I’m not sure I ever would’ve gotten around to it.
This site is a constant WIP and I’m sure I’ll be making some tweaks over the coming months, but if you notice any major usability problems that I missed, open a GitHub issue.
Published sequence 079, spring in Leipzig.
Soutaro (major Ruby typing contributor) has launched a new RBS::Inline project that enables Ruby type annotations in comments instead of a companion RBS file:
# rbs_inline: enabled
class Person
attr_reader :name #:: String
attr_reader :addresses #:: Array[String]
# @rbs name: String
# @rbs addresses: Array[String]
# @rbs returns void
def initialize(name:, addresses:)
@name = name
@addresses = addresses
end
def to_s #:: String
"Person(name = #{name}, addresses = #{addresses.join(", ")})"
end
# @rbs yields (String) -> void
def each_address(&block) #:: void
addresses.each(&block)
end
end
The README notes that if all goes well, this will be merged back upstream into the main RBS gem and become a first class feature. During a recent trial run of RBS I found having to have two tabs open for every Ruby file (the .rb and its companion .rbs) to be somewhat painful, and having an alternative is a welcome addition.
Slack is training AI on private customer data.
Years ago I had a conversation with a company that was building a Slack killer. I thought they were crazy. Slack was a beloved product that had built itself the perfect moat. Not through some exotic feature set that nobody else had, but by being feature complete, and a little better and a little more refined than any of its competitors.
Compared to HipChat (what we’d been using at the time), Slack was lightning fast, had a thoughtful UI that someone had obviously taken the time to get right, and prone to far fewer bugs and outages. It was hard to describe in few words why one chat app was so much better than the other chat apps, but anyone who used Slack for five minutes understood immediately.
A lot of ground’s been lost since then. Slack’s recent UI redesign is a textbook case of ignoring usability in favor of minor cosmetic benefit – a fundamental degradation in UX to save twenty pixels on the left side of the screen. When users expressed unhappiness, Slack doubled down. The alternative – a roll back and tarnishing of the reputation of some random, unqualified design VP at Salesforce – was obviously intolerable. Previously rock solid reliability has gotten steadily worse with long load times and frequent client desynchronization.
Telling paid users that their private communications are now the property of Slack for AI training is more of the same. (But don’t worry, you can opt out by emailing support! They could’ve by all rights required that a paper TPS form be submitted by mail.)
If there’s one future lesson from Slack, it’ll be that it’s possible to lose your lead not only through competitors advancing, but by regressing yourself.
Thought provoking post from DHH on the broad failure of system tests, defined in this context as web UI tests driven by a headless browser.
A good way to test UIs is a problem that people have been trying to solve since the moment I stepped out of university and into a software engineering job. Back then, despite the elusiveness of a good answer to date, I assumed that someone would eventually figure it out. Now, almost twenty years later and with even a halfway good answer still elusive, I’m not so optimistic.
Our latest round of test strategy uses Playwright, which describes itself as “reliable end-to-end testing for modern web apps”. I haven’t found it particularly so:
git clone && make test
as you can get.

All in all – and I’m trying my best to make sure that I’m not exaggerating – the false positive rate on failures is something like 99%. I actually don’t recall ever seeing a true positive in the sense that a test case caught something I broke by accident.
DHH’s prescription seems extreme at first glance:
HEY today has some 300-odd system tests. We’re going through a grand review to cut that number way down. The sunk cost fallacy has kept us running this brittle, cumbersome suite for too long.
But for a test suite that slows development and only prevents a regression once in a blue moon, isn’t it the only rational answer?
Published fragment Use of Go’s cmp.Or for multi-field sorting, on a more elegant way to sort on multiple fields using Go 1.22’s cmp.Or helper.
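The gist, as a minimal sketch (struct and field names are mine, not the fragment’s): cmp.Or returns the first non-zero comparison result, so later fields only break ties.

package main

import (
    "cmp"
    "fmt"
    "slices"
)

type Employee struct {
    LastName  string
    FirstName string
    Age       int
}

func main() {
    employees := []Employee{
        {"Smith", "Alice", 44},
        {"Jones", "Bob", 31},
        {"Smith", "Alice", 29},
        {"Smith", "Bob", 50},
    }

    // Sort by last name, then first name, then age.
    slices.SortFunc(employees, func(a, b Employee) int {
        return cmp.Or(
            cmp.Compare(a.LastName, b.LastName),
            cmp.Compare(a.FirstName, b.FirstName),
            cmp.Compare(a.Age, b.Age),
        )
    })

    fmt.Println(employees)
}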
I recommend Return YouTube Dislike, a browser extension that brings back the dislike count on YouTube videos, run as an open source project on GitHub. I had a vague idea that an extension like this might exist, but didn’t do anything about it until recently, when I saw a screenshot from someone else who still had theirs intact.
It shows how much work Google has put into protecting the message from our heroic defenders-of-big-D-Democracy coastal wine-and-cheese chattering class, who naturally are the self-elected determiners of all that is tasteful and good. Best exemplified by Hollywood, the trailer for the new Rings of Power season that dropped yesterday is sitting at a highly skewed 👍71k/👎340k. The trailer for Disney’s upcoming Star Wars series The Acolyte is at 👍70k/👎159k. Of a half dozen of CNN’s most recent videos, the majority of them have more dislikes than likes. Merely knowing how widely this content is disliked is dangerous information that must be suppressed, and Google is more than happy to comply.
Published fragment ValOrDefault, on a pair of helper functions ValOrDefault and ValOrDefaultFunc that can help significantly to clean up Go code around assigning default values.
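As a rough sketch of the shape of these helpers (the fragment’s actual implementations may differ):

// ValOrDefault returns val unless it's the zero value of its type, in which
// case it returns defaultVal.
func ValOrDefault[T comparable](val, defaultVal T) T {
    var zero T
    if val == zero {
        return defaultVal
    }
    return val
}

// ValOrDefaultFunc defers computing the default until it's actually needed,
// useful when the default is expensive to build.
func ValOrDefaultFunc[T comparable](val T, defaultFunc func() T) T {
    var zero T
    if val == zero {
        return defaultFunc()
    }
    return val
}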
And actually, an update: Go 1.22’s cmp.Or supersedes this custom helper.
Published fragment Heroku on two standard dynos?, on contemplating whether the Heroku platform would fit on two standard 512 MB dynos if it could be ported from Ruby to Go.
Published fragment Activating cached feature flags instantly with notify triggers, on reflecting changes made to feature flags immediately, despite a local in-process cache, by firing sync notifications from triggers, and listening with the notifier pattern.
Published sequence 078, lights over Friedrichshain.
Published fragment Shiki, on at long last adding syntax highlighting for code blocks to this website, and what I like about Shiki, a syntax highlighter that runs on the same engine as VSCode.
Published fragment Plumbing fully typed feature flags from API to UI via OpenAPI, on a pipeline that moves feature flags defined in a backend YAML file through to OpenAPI, then onto generated TypeScript that uses types to see which flags are available and check their state of enablement.
We published a post to the River blog: Uniqueness with Postgres advisory locks and FNV hashing. It covers how River guarantees job uniqueness using a combination of transactions, advisory locks, and the FNV hashing algorithm to build a string representation of unique properties, then hash it into the 64-bit advisory lock space.
Published sequence 077, cubes inside cubes.
Published fragment Digital detox, on my longest period without a smartphone in a long time, and how to have a better relationship with it in the future.
Published The Notifier Pattern for Applications That Use Postgres, on maximizing Postgres connection economy by using a single connection per program to receive and distribute all listen/notify notifications.
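A hedged sketch of the general shape of the pattern, assuming pgx v5 (this is not the post’s exact implementation): one dedicated connection LISTENs on every channel the program cares about, and a fan-out loop distributes notifications to in-process subscribers, so the rest of the program never holds a connection just to listen.

package notifier

import (
    "context"
    "sync"

    "github.com/jackc/pgx/v5"
)

type Notifier struct {
    conn *pgx.Conn

    mu          sync.Mutex
    subscribers map[string][]chan string // Postgres channel name -> subscriber channels
}

func New(conn *pgx.Conn) *Notifier {
    return &Notifier{conn: conn, subscribers: make(map[string][]chan string)}
}

// Subscribe registers interest in a notification channel. Call before Run so
// the LISTEN is issued on the shared connection up front.
func (n *Notifier) Subscribe(ctx context.Context, channel string) (<-chan string, error) {
    n.mu.Lock()
    defer n.mu.Unlock()

    if _, err := n.conn.Exec(ctx, "LISTEN "+pgx.Identifier{channel}.Sanitize()); err != nil {
        return nil, err
    }

    sub := make(chan string, 16)
    n.subscribers[channel] = append(n.subscribers[channel], sub)
    return sub, nil
}

// Run blocks on the single shared connection, distributing each notification
// to every subscriber of its channel.
func (n *Notifier) Run(ctx context.Context) error {
    for {
        notification, err := n.conn.WaitForNotification(ctx)
        if err != nil {
            return err
        }

        n.mu.Lock()
        for _, sub := range n.subscribers[notification.Channel] {
            select {
            case sub <- notification.Payload:
            default: // drop rather than block on a slow subscriber
            }
        }
        n.mu.Unlock()
    }
}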
Published sequence 076, Five Elephants.
Updated now, and published the same photo as sequence 075.
Published Web APIs: Enriched DX By Disallowing Unknown Fields, on using Go’s DisallowUnknownFields option to improve an API’s integration experience by making parameter naming mistakes faster to resolve.
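As a rough illustration (the request struct and field names here are made up): with DisallowUnknownFields set on the decoder, a misspelled parameter fails loudly instead of being silently dropped.

package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

type CreateAccountRequest struct {
    Email string `json:"email"`
    Name  string `json:"name"`
}

func main() {
    body := `{"email": "user@example.com", "nmae": "Jane"}` // note the typo: "nmae"

    dec := json.NewDecoder(strings.NewReader(body))
    dec.DisallowUnknownFields()

    var req CreateAccountRequest
    if err := dec.Decode(&req); err != nil {
        // json: unknown field "nmae" — surfaced to the API caller as a 400
        fmt.Println(err)
    }
}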
Published sequence 074, light roast.
Want a great life hack? Simple: 5 AM run.
As absolutely awful as it is to drag yourself out of bed and across the threshold of your door, from there every aspect of the day gets better. For an hour you enjoy the world at its most peaceful, with the natural world at its most active, and reduced human traffic of all kinds. Afterwards, your mind is tranquil, focused, and clear, and you’ve still got hours to do something creative before most people will even start their day. Even the light burn in your legs feels good, reminding you that you’ve filled the quota you owe your body.
Recently, I’ve been failing to get out for one more often than not. Today, I made it, from Friedrichshain across the Spree to Treptower Park, doing the perimeter starting along the water. Sunrise is at 5:30 AM this time of year, but it’s light already by the time I leave. Every time I do one of these, all I can do is wonder why the hell I don’t do it every day.
Published sequence 073, glass and steel.
Published fragment Histograms worked, following up on the use of histograms to produce metric aggregates that can be used to generate charts over much wider time ranges like a week or a month.
Published fragment Ruby typing 2024: RBS, Steep, RBS Collections, subjective feelings, on diving into the RBS ecosystem as an alternative to Sorbet for Ruby typing.
Everybody loves dolphins.
We do one shore day so we’re not diving within 24 hours of a flight. Our hostess told us they’d take us out on a dolphin watching expedition. We accepted, not having any idea whether to expect any dolphins to show.
Our divemaster Tuks motors us for a half hour along the shore, the boat pelted with rain but the storm causing a surprising stillness on the water, then out to sea, past a shallow coral reef, and into an area where dolphins are found. Tuks spots a pod in the distance.
There’s nothing else on Earth that’ll inspire more child-like wonder in an adult than dolphins. The moment a sighting is announced, every person rushes to the front of the boat, clings to a rail, and stays fixated there for the next thirty minutes. Everybody loves dolphins.
They play ball by swimming under our boat’s bow in easy camera shot for quite some time before breaking off and performing aerial acrobatics at a distance. One juvenile in particular does a series of forward vertical somersaults that don’t seem like they should be physically possible. This species is appropriately named the “spinner dolphin”.
Engineers and commit/bullshit ratio:
You also intuitively know what bullshit is. It’s delays, bad taste, fighting a lot, being dogmatic, complaining, broken code, laziness, cynicism, activism, pedantry, entitlement. Bullshit is everything that makes your coworkers’ life more of a pain than it needs to be.
Everybody is allowed a little bullshit because if you only allow zero bullshit you can never work with anyone at all. But bullshit must be paid for with commits. The more bullshit you generate, the more commits you need to push. It’s not an exact science, but it doesn’t need to be. Everyone already knows. Think of a coworker and ask yourself– what is their commit to bullshit ratio? The answer probably leaps to mind. Maybe the answer is “unusually high”. Or maybe it’s “neutral at best”. Whatever it is, you already know.
This heuristic rings true. For junior engineers, “bullshit” is traditional poor practice in the space: bad code, lazy test coverage, failure to follow convention, or novel new patterns that complicate unnecessarily. It’s inexperience at work. Many will graduate out of it.
For seniors, bullshit is more subtle: days of bikeshedding over the fine points of tech decisions that vary the end result by ±5%, fighting about languages and toolchains, continually relitigating architecture, and judging other people’s code on the merits of being “correct” or “elegant” or “minimal” according to an idyllic standard that doesn’t exist.
Everybody does some of it, but the key is ratio. Good engineers do little while pushing features most of the time. Less good engineers engage in a lot of “bullshit”, wasting their own time, and that of others’ who are pulled into reviews and battles.
Ironically, many of the most internet-famous engineers I’ve worked with fit squarely in the latter camp, while others, so invisible that they barely have a LinkedIn profile, land in the former.
Vernor Vinge passed away a few weeks ago. I would rate few scifi books as truly great, but the ideas presented in Vinge’s Marooned in Realtime and the concept of a bobble absolutely blew my mind, and they’re not even his most influential work.
Below, a photo and a screenshot of his hacker workstation from a 2009 interview. Linux, Emacs, and Org Mode (or something close to it). How can you not love it.
Added a benchmarks page for River. After optimizing job completion so that it’s done in batches, it works about 46k jobs/sec on my M2 MacBook Air.
Published fragment Modals and mysterious macOS failures, on bequeathing cron scripts permission to run.
Published fragment Stubbing degenerate network conditions in Go with DialFunc and net.Conn, on using DialFunc to return a minimal stub for net.Conn that can simulate hard-to-reproduce conditions like an error on Close.
type connStub struct {
net.Conn
closeFunc func() error
}
func newConnStub(conn net.Conn) *connStub {
return &connStub{
Conn: conn,
closeFunc: conn.Close,
}
}
func (c *connStub) Close() error {
return c.closeFunc()
}
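For context, here’s a hedged sketch of how such a stub might get wired in, assuming pgx’s pgconn, whose config exposes a DialFunc hook of this shape (the fragment may wire it differently): dial the real connection, then wrap it so Close returns an error.

// newFlakyCloseConfig parses a connection config and installs a DialFunc that
// wraps every connection in connStub (defined above) so that Close always
// fails, simulating the hard-to-reproduce condition in tests.
func newFlakyCloseConfig(databaseURL string) (*pgconn.Config, error) {
    config, err := pgconn.ParseConfig(databaseURL)
    if err != nil {
        return nil, err
    }

    config.DialFunc = func(ctx context.Context, network, addr string) (net.Conn, error) {
        conn, err := (&net.Dialer{}).DialContext(ctx, network, addr)
        if err != nil {
            return nil, err
        }

        stub := newConnStub(conn)
        stub.closeFunc = func() error {
            return errors.New("simulated error closing connection")
        }
        return stub, nil
    }

    return config, nil
}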
River’s received some recent fixes with the aim of stabilizing it for production readiness:
A fix in riverpgxv5’s Listener where it wouldn’t unset an internal connection if Close returned an error, making the listener not reusable.

We’ve run some open-ended tests with continuous job insertion and work over long periods. Its memory use is stable, so we think those were the only leaks in there (at least for now).
Published fragment Prepared statements for psql operations, on using prepared statements with operational queries to make it easy to replace parameters and save time.
PREPARE add_flag_to_team(text, uuid) AS INSERT INTO flag_team (
flag_id,
team_id
) VALUES (
(SELECT id FROM flag WHERE name = $1),
$2
) ON CONFLICT DO NOTHING;
EXECUTE add_flag_to_team('<flag_name>', '<team_id>');
As I was untangling my Amazon/AWS credentials last night, I did something that I don’t do often, and looked at the details of my AWS bill.
The total cost of hosting this site for January: $3.08. That doesn’t seem like a bad deal, but digging in a little, it turned out I was overpaying.
Of $3.08, $3.07 was for S3 (I was mildly surprised to see that all my CloudFront use fits in the free tier). And of that, $2.15 was for PUT, COPY, POST, or LIST requests on this site’s bucket. The GETs used to actually serve the site are cheaper, and added up to only $0.39.
The S3 lists and mutations are generated from the build process, which from a GitHub Action syncs the built product with S3. The majority of builds are automated on cron – some parts of the site like reading or twitter ingested data from the Goodreads and Twitter APIs, so I’d had the site building every three hours to pick up changes.
But over the last year, both those APIs have experienced unceremonious deaths, reducing the dynamic content on this site to zero. All relevant changes are now pushed through Git, leaving the cron schedule a vestige of better times.
I reduced the cron frequency from three hours to three days (still a good idea to check periodically that the build still works), which should have the effect of bringing that $2.15 down by an order of magnitude.
Saving $2 this way was certainly not worth the time (that’s about 1/3rd of a single San Francisco coffee these days), but hey, it’s fun.
A Go 1.21 feature that I’d previously missed: toolchains. A toolchain consists of a bundled compiler, assembler, and set of standard Go tooling. An installed go command has a bundled toolchain, but is capable of fetching other ones as necessary.

Today, to upgrade my project to Go 1.22, all I had to do was change one line in go.mod:
$ git diff
diff --git a/go.mod b/go.mod
index 49a960839..f1b3ff857 100644
--- a/go.mod
+++ b/go.mod
@@ -11,7 +11,7 @@
module github.com/crunchydata/priv-all-platform
-go 1.21
+go 1.22.0
The next Go command I ran detected the absence of Go 1.22, and downloaded it, producing the most streamlined upgrade path of all time.
$ go version
go version go1.22.0 darwin/arm64
Published fragment Thoughts on ONCE + Campfire, on ONCE’s $299 self-hosted chat app from Basecamp. Is a web app for chat fine nowadays?
A tweet:
typing is the secret to using the computer. if you’re not typing, you should be clicking on stuff. if you’re scrolling, then it’s already over… you’re not doing shit
A little brash on the surface, but a font of wisdom below. If you’re on a computer scrolling, nothing useful is happening. Best to stop using said computer and go do something better.
Published fragment Hard media, on the disappearance of physical media and the overabundance of its digital counterpart.
2023 was an odd fitness year for me, simultaneously being one of victory and of defeat. I did some of my best running distance ever, finishing just over 1,700 km, but still ended the year heavy. A clear indicator that nutrition is at least as important as exercise.
I started a daily running streak going into France, and hit 163 consecutive days before it ended with my trip to the John Muir Trail. This was a great habit – wanting to keep the streak going, I’d be out there every day rain or shine, even when I hated it. There was a moment where I arrived in Berlin late in the evening, exhausted from three days of long distance travel, but still, like every other day, dragged myself to the door, put on my running shoes, and ran around in the dark until I hit my 5 km minimum.

Running is such a great way to see Europe. By the end of my stay in any place – Paris, London, Bath, York, Berlin – I’d have multiple routes figured out and strong opinions on which were the best. Europe’s a crowded place, so I’d wake up as early as I could, often starting before the sun was out. Getting my run done early also improved the chances that it’d get done at all.
Takeaways for 2024: Habit streaks work, early is better than late, and without nutrition, fitness is only half of the whole.
Published sequence 072, Prairie ridgeline.
SQLite’s Wal2 mode was new to me. Added for high throughput systems to resolve a problem where if a wal file was being continuously written to with new changes, SQLite’s checkpointer could never fully finish its work, and therefore the wal file could grow unbounded.
Wal2 fixes the problem by juggling two wal files which its writer and checkpointer alternate between:
In wal2 mode, the system uses two wal files instead of one. The files are named “[database]-wal” and “[database]-wal2”, where “[database]” is of course the name of the database file. When data is written to the database, the writer begins by appending the new data to the first wal file. Once the first wal file has grown large enough, writers switch to appending data to the second wal file. At this point the first wal file can be checkpointed (after which it can be overwritten). Then, once the second wal file has grown large enough and the first wal file has been checkpointed, writers switch back to the first wal file. And so on.
Published fragment Discovering histograms, kicking off metric rollups, on a blog post explaining why aggregating aggregates doesn’t work, and using histograms to do the job instead. It’s a modest effort, but it’s something to kick off writing for 2024.
Published sequence 071, still water.
Goodhart’s law:
When a measure becomes a target, it ceases to be a good measure.
Published sequence 070, bulbs at 101.
The Christmas tree this year. Mom and Dad have been using a real tree for four decades, but with the help of an attractive sale from Costco, this is finally the year they switched to artificial. There’s nothing quite like the smell of an honest-to-God tree on Christmas morning, but these days the fake ones are gorgeous, and win hands down on practicality with no cleanup and built-in lights.
Follow up from yesterday: dumped Goodreads immediately after realizing that it was not only returning missing information for newly read books, but was tainting my existing archives as well, with records of previously read books starting to come back corrupted.
My book list is now a flat file in TOML instead. More robust, but it’s a shame how one by one, every third party API this site used to ingest for aggregation has disappeared – Goodreads was the last one standing, and it’s gone.
Goodreads has been in the process of deprecating their API for years now. As of December 2020 they stopped issuing new API keys, but let existing ones keep working.
It’s not documented, but even with an existing API key, somewhere around mid-2022 they crippled the API so that it no longer returns many properties on books – e.g. publication year or number of pages. It’s still possible to extract reviews, but that’s about it.
I recently restyled this site’s /reading section which had still been syncing from Goodreads (although I’m about a year behind on reviews). The API still technically works, but given the certainty of deprecation and truncated functionality, it seems about as good of a time as any to move back to a flat file.
Today, I accidentally stumbled across .ovh domains, offered by OVH Cloud, a French VPS provider, which may be the cheapest TLDs available – $2.15/year for initial registrations, and then $3.49/year for renewals.

I’m a strong proponent of the “make it first, register domain second” mantra, since spending money on domains is easy while doing is hard, but with that caveat, .ovh may be a good spot for long-term side projects.

.tk domains are technically cheaper because they’re free (something which I’d somehow never realized, despite having seen them in the wild for 2+ decades now), but have been unavailable for registration since March, after Meta sued them for (surprise!) making up a massively disproportionate share of domains used for phishing and other malice (technically, the lawsuit is for cybersquatting and trademark infringement).
I enjoyed The “Cheap” Web (potato.cheap), a self-proclaimed solarpunk philosophy of web design.
An extract:
Large parasocial platforms transformed the internet into a hostile and impersonal place. They feed our FOMO to keep us clicking. They exaggerate our differences for “engagement”. They create engines for stardom to keep us creeping. They bait us into nutritionless and sensationalist content. Humanity cannot subsist on hype alone.
Small and sincere communication quietly thrives. It’s easy to find and even easier to make yourself:
Write on the internet. Find or create a third place. Pick up the phone. Join niche interest groups. Live, don’t lurk. Embrace candid culture. Put people you care about on the calendar. Don’t play near black holes. Meet people at farmers markets. Learn to communicate. Make wobbly things. Subscribe to local events calendars. Learn to win friends. Learn to feel. Email strangers. And so on.
We’re at a difficult impasse right now. A smaller, more independent web is a fundamentally healthier model both for sustainability and for our minds, but every emergent force of the internet acts in opposition to it. Platforms tend toward consolidation and monopolization, and our animal minds are susceptible to clickbait and negative emotion. Counteracting that won’t be free. We’ll need a new approach for building and interacting online that takes conscious effort and discipline.
Golangci-lint 1.55 bundles in testifylint, which must be the best new linter in years, aiming to make calls to the testify package more consistent.

Its best aspect is being able to detect reversed parameters, i.e. require.Equal(t, actual, expected) when it should be require.Equal(t, expected, actual). An example fix from River:
- require.Equal(t, cleaner.Config.RescueAfter, RescueAfterDefault)
+ require.Equal(t, RescueAfterDefault, cleaner.Config.RescueAfter)
But there are many other good ones, like requiring Empty instead of asserting a length of zero:
- require.Len(t, job.Errors, 0)
+ require.Empty(t, job.Errors)
Or use of ErrorAs:
err1 := &UnknownJobKindError{Kind: "MyJobArgs"}
var err2 *UnknownJobKindError
- require.True(t, errors.As(err1, &err2))
+ require.ErrorAs(t, err1, &err2)
Published sequence 069, limited visibility.
Published sequence 068, orbs.
Never offer a newsletter sign up without linking to at least one existing issue of said newsletter.
Sam Altman ousted from OpenAI and Brockman demoted.
I have nothing intelligent to say about the situation, so I’ll leave it at this: isn’t this just about the most astonishing thing that could possibly have happened in tech right now?
The highest profile person at the most important company in the most important new technology space, a man presumed king, is fired.
Nanoglyph 041 is published, on Postgres 16, iPhone 15, charging batteries to 80%, and APEC.
Published sequence 065, protesting the CCP, taken during APEC (Asia-Pacific Economic Cooperation) in San Francisco. Xi is staying in the presidential suite about a block away.
Published sequence 064, Paris cool, at Le Grand Bassin Rond.
WeWork filed for bankruptcy last week. With its stock price down 99.5% since peak and whispers rampant, it’d been expected for a while.
I was wondering if it’d change anything operationally. I went in the next day and was seated close enough to the space’s office managers to overhear most conversations as people made their way in throughout the day. There might’ve been one person who brought up the bankruptcy filing. Entirely business as usual.
Despite the bad news, every writeup on the subject has had an amazingly optimistic outlook for the company. This excerpt from The Industry is representative:
First of all, this is America, which means the rich go out different (a good thing btw, but we’ll save it for another day). “Bankrupt” doesn’t mean “gone,” exactly, but there is a hierarchy in terms of who gets screwed the most. The likeliest case is equity shareholders and stockholders get nothing, and debt holders get whatever is left. Someone could buy the company, shed the worst leases, renegotiate the rest, and operate a leaner WeWork that benefits from stunning spaces built out using ZIRP-era VC dollars. So your nearest WeWork may shut down, but a lot of them, in some form or another, will likely stick around.
Assuming they can get that debt off the balance sheet and their most egregious leases from peak CRE renegotiated, I’d be amazed if WeWork couldn’t recover. It’s a great product with good revenue, and aside from those leases (which would be priced much lower in 2023), it really shouldn’t cost that much to run.
The Ruby on Rails documentary has been published. I watched it this morning. It’s just about the right length (45 minutes), and centers around interviews with DHH, Jason Fried, and Tobias Lütke.
Today, after years of VSCode, I finally found the options to disable its default behavior of automatically adding closing brackets and quotes (i.e. if you type {, it’ll add }, or " to get a second ").

They’re in Preferences (⌘,) under Auto Closing Brackets and Auto Closing Quotes.
I used the default for a long time because I figured that I must be missing something – most of VSCode’s defaults are pretty good, so somebody must find this behavior useful. If I give it a chance, that usefulness will eventually pop out at me.
But after a long trial period, I’m killing it with prejudice. For the life of me I can’t remember one situation ever where an automatic bracket/quote saved me anything beyond the most infinitesimal amount of time, but on the other hand, especially when refactoring, they waste time by doing the wrong thing in 90%+ of cases. I’ll be changing things around within an already existing line, and as I’m adding a bracket/quote, VSCode puts in a closing version even though one already exists. I actually developed muscle memory to tab right and delete the new character.
A similar story for automatically added tags in HTML templates. When writing brand new templates it’s pretty handy to have the closing tags inserted for you, but I find that I’m editing existing templates the majority of the time, and once again, VSCode is constantly inserting closing tags where one already exists.
A rare miss for VSCode. Maybe I just don’t get it.
An unfortunate discovery in Go today: when using go test and running a testable example, the -count flag is silently ignored – a confusing, underdocumented sharp edge for users. That is, unless you specify the magic number -count=1, in which case you get the normal test cache busting behavior.

You can find a caveat in help once you suspect the existence of this problem, but the only way to notice it’s happening is that your tests run suspiciously fast at high -count iterations. No error or warning is produced.
$ go help testflag
...
-count n
Run each test, benchmark, and fuzz seed n times (default 1).
If -cpu is set, run n times for each GOMAXPROCS value.
Examples are always run once. -count does not apply to
fuzz tests matched by -fuzz.
This makes it hard to reproduce intermittent problems in an example test. As a workaround, I’m using a shell loop that runs the test until it returns a non-zero exit code:
while go test . -run Example_customInsertOpts -test.v -count=1; do :; done
I was spelunking some Go internal code today, and came across the definition of testing.TB. See private() at the end:
// TB is the interface common to T, B, and F.
type TB interface {
Cleanup(func())
Error(args ...any)
Errorf(format string, args ...any)
Fail()
FailNow()
Failed() bool
Fatal(args ...any)
Fatalf(format string, args ...any)
Helper()
Log(args ...any)
Logf(format string, args ...any)
Name() string
Setenv(key, value string)
Skip(args ...any)
SkipNow()
Skipf(format string, args ...any)
Skipped() bool
TempDir() string
// A private method to prevent users implementing the
// interface and so future additions to it will not
// violate Go 1 compatibility.
private()
}
The interface mixes in a private function to make it impossible for external packages to implement it. This leaves the authors free to add new functions to the interface without worrying about breaking existing user code.
Personally, I hadn’t seen this technique in action before. It’s probably something that most Go APIs could take advantage of.
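Here’s a hedged sketch of the same technique applied to a hypothetical API of one’s own (names are made up): external packages can accept and call a Store, but can’t implement it, so methods can be added later without breaking anyone.

package store

type Store interface {
    Get(key string) (string, error)
    Set(key, value string) error

    // private prevents implementations outside this package.
    private()
}

type memoryStore struct {
    data map[string]string
}

func NewMemoryStore() Store {
    return &memoryStore{data: make(map[string]string)}
}

func (s *memoryStore) Get(key string) (string, error) { return s.data[key], nil }
func (s *memoryStore) Set(key, value string) error    { s.data[key] = value; return nil }
func (s *memoryStore) private()                       {}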
A new version of goleak (1.3.0) includes an IgnoreAnyFunction function which allows a specific function to be ignored at any level of a goroutine’s stack. Previously, only a function at the top layer could be ignored, which forced us to ignore extremely generic functions like time.Sleep.

For example, this ignore, which was ignoring an internal function deep in the standard library (internal/poll.runtime_pollWait):
goleak.IgnoreTopFunction("internal/poll.runtime_pollWait")
Becomes this, which properly ignores the specific offending goroutine in Pgx:
goleak.IgnoreAnyFunction("github.com/jackc/pgx/v5/pgconn/internal/bgreader.(*BGReader).bgRead")
I’ve used youtube-dl on and off over the years, mostly for cases where I want a video off YouTube for offline/long term viewing.

Recently, Google’s started flirting with warnings that adblockers aren’t allowed on YouTube. It’s a soft paywall for now that’s dismissable, but the next step could be a full ban.

I’ve been playing with a workflow involving yt-dlp (a fork of youtube-dl with more active development). When linked to YouTube, copy the link instead of following it, jump to a terminal and paste it with yt-dlp <link>, then open the file with Movist Pro (a great all-purpose video player for macOS).
There’s definitely a little friction, but it comes with benefits (a much better video playback experience including keyboard shortcuts and being able to pin the window on top, no buffering), and is only nominally worse than dismissing popups.
Published sequence 063, Paris chic.
Published sequence 062, Camp Reynolds.
A very in-depth comparison of Ruby HTTP clients in 2023. I try to avoid dependencies where possible, but the API of Ruby’s built-in HTTP client is so absolutely brutal that I’ve always defaulted to using Excon.
The author finds that HTTPX is the best choice, which you have to take with a grain of salt since the article’s author also wrote HTTPX (although the comparison is fair by my read).
HTTPX’s only dependency is http-2-next, with no C extensions or other shenanigans. It’s feature rich, and will use powerful HTTP constructs by default. See this example where multiple requests are made concurrently automatically, and those targeting the same host will multiplex over the same socket with HTTP/2:
HTTPX.get(
  "https://news.ycombinator.com/news",
  "https://news.ycombinator.com/news?p=2",
  "https://google.com/q=me"
) # first two requests will be multiplexed on the same socket.
The corner cases it can handle are genuinely impressive. For example, here’s how to target a request to an IP address over HTTPS and pass a specific hostname to verify using TLS SNI, something that almost no one will ever have to do, but great to have for anyone who finds themselves in that position:
HTTPX.get("https://172.45.65.131:5647/",
ssl: {hostname: "proxy-ssl"},
headers: {"host": "subapp.com:5647"})
For Go 1.22: enhancements to Go’s HTTP routing capabilities. Namely, routes will be lockable to specific HTTP verbs (GET, POST, …) and will be able to take named path parameters (/item/{user}), crucial features that’ve been desperately needed in Go for 14 years (Go was announced November 2009), and whose absence has created an absurd plethora of third party libraries to fill the vacuum.
Example usage for ServeMux.Handle and ServeMux.HandleFunc (a quick sketch follows the pattern list below):
/item/
POST /item/{user}
/item/{user}
/item/{user}/{id}
/item/{$}
POST alt.com/item/{user}
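As a rough sketch of how I expect this to look in practice (based on the proposal rather than a shipped release, and with hypothetical handler bodies), the method and wildcards live in the pattern string, and captured segments are read back with Request.PathValue:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	mux := http.NewServeMux()

	// Method and wildcard are both part of the pattern string.
	mux.HandleFunc("POST /item/{user}", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "created item for %s\n", r.PathValue("user"))
	})

	// Multiple wildcards work too.
	mux.HandleFunc("GET /item/{user}/{id}", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintf(w, "item %s for %s\n", r.PathValue("id"), r.PathValue("user"))
	})

	http.ListenAndServe(":8080", mux)
}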
I was looking through our code and I think that with this available we’ll be able to tear out Mux, although a nicer way to do middleware and subrouters in net/http would be appreciated.
An exciting PGBouncer release: “The one with prepared statements”.
Prepared statements have always been a problem for PGBouncer because they’re prepared at the level of a session rather than shared in the database across connections. PGBouncer attains its multiplexing capabilities by mapping “virtual” sessions to “real” Postgres sessions, so there wouldn’t be any guarantee after a statement had been prepared that a subsequent attempt to use it would go to the original session.
The new version of PGBouncer solves the problem via internal bookkeeping where PGBouncer itself becomes prepared statement aware. If an incoming use of a prepared statement would go to an underlying connection where it hasn’t been prepared yet, that gets taken care of first.
The release notes claim a performance improvement of between 15% and 250%. I’d guess that it’d be closer to 15% for most real world apps, but if you’re using PGBouncer, why not. It’s still 15%.
We’ll have it ready on Bridge soon. (Update: now available.)
Published sequence 061, Pagan.
Some people are writing some legitimately great content in longform Twitter/X posts. See this one for example from Abhinav Upadhyay on the difference between tail -f and tail -F (tail -F watches for the file being rotated and reopens it if it is). Easily blog post quality.
I love it, but it does make me wonder what we’re doing here. A year from now this content will have been disappeared into the closed tangle of throwaway detritus that is Twitter, functionally no longer existing for the purposes of indexing. On the other hand, I understand the motivation to post on Twitter, where reach and interaction will be orders of magnitude greater.
This is an old man shaking fist at cloud moment for me, but it’s unfortunate that we’ve been yet unable to discover a better compromise.
Apple’s being sued in California over claims that AirTags enable stalking.
This is a tough one. By designing a very functional and affordable product, Apple’s created the world’s best tracking device. I’m reminded of old spy movies where a hero or villain attaches a little beeping box to the undercarriage of their opponent’s vehicle. These days you might just use an AirTag, with orders of magnitude improvements in range, size, and battery life over what any of those old world trackers would’ve had, and at $30 instead of the $10,000 or whatever Raytheon would’ve charged.
It was inevitable that some of this tracking would end up being malicious, but can’t any product be misused for harm? And can that really be said to be the manufacturer’s responsibility?
Apple does seem to have taken significant good-faith measures to curb misuse. Apple users are notified when an AirTag is traveling with them (I’ve been getting these regularly when walking around with other AirPods users), an app is available on Android to do the same, and the AirTag can be made to produce an audible tone to help identify its location. When setting one up, the owner is warned against misusing it, and is informed that Apple will happily provide information to law enforcement on request. Pre-AirTag trackers would’ve had none of these safety features.
Nanoglyph 040 is published, on Rails World, Rails 7.1, a DSL for CTEs (common table expressions), and Amsterdam.
Just look at the location of that Crunchy booth! Front row and center.
Published fragment The peanut gallery, on the use of words like “simple” and “elegant” in the sense of software.
Published fragment Optimizing row storage in 8-byte chunks, on optimizing field order in a row to avoid padding (for when tables are expected to be really big) and a useful query to help.
Unboxed an iPhone 15 Pro Max this morning. We’re well past the point of dark comedy in how little these things change year-over-year. The 15 is indistinguishable from the 14, and you have to be an Apple connoisseur to spot the differences even from a 12 or 13. You can easily tell it apart from an 11, but only because that was the year that Apple made the inspirational jump from curved edges to square edges (a reboot of iPhone 4 era design). Hash tag courage.
That said, as with previous iterations, it’s a great phone. Impressions from the first day of use:
USB-C, finally. I now have exclusively USB-C gear, so I can travel with only one type of cable.
The Action button is great. Instead of activating on a simple press (prone to false positives), you hold it for a moment, and it provides the perfect amount of haptic feedback. I have mine mapped to open the camera, so it’s now much faster to do so (the alternative was finding a widget from the lock screen) and I can do it by touch only. The old “silent” switch it replaced was so freaking useless – it was a crime to leave it on there for as long as they did.
Titanium feels great and is more grippy than the previous body. The phone is easy to hold onto even without a case.
Some people have reported heating problems. Mine definitely got hot as it was restoring from backup, but it’s been fine since.
Stranded in Amsterdam.
Yesterday, I was supposed to fly AMS to SFO returning from Rails World this week. I’ve been having pretty good luck with airlines recently, and didn’t think much of it. At the gate things seem okay, except boarding is delayed a few minutes, ostensibly because the pilot is late, which in retrospect should’ve been a red flag. Once we’re all boarded, we’re informed we’ll be delayed thanks to a technical issue. They’re working on it and will update us at 3:30p. At 3:30p we’re informed we’ll get an update at 4p. At 4p, it’ll be at 4:30p. At 4:30p, it’ll be at 5p.
We know the writing on the wall is a cancellation, and it’s a dreadful thought. This is a major leg that flies a 777. If it’s cancelled, that’s almost 400 people who need alternate routes. Is that even possible?
Sure enough, it’s cancelled. May god have mercy on our souls.
But this is where we get some balance to our bad luck. We’re informed that we’ve been automatically rebooked to a new flight the next day and have been given a room at the local Steigenberger near the airport, where we’ll also be fed. The hotel’s airport shuttle is crowded, but otherwise we exit the airport and get over there without much fanfare. This must be EU regulation working in our favor. We’re flying f*ing United, so you know they would’ve cancelled the flight and left us on the tarmac to rot if they thought they could legally get away with it.
In a happy coincidence, Craig and I were accidentally going back on the same flight (we should’ve coordinated but didn’t), and spend the evening exploring the local hotel airport bars.
In every North American city, the area for ten miles in every direction around an airport is an exclusion zone of highways and parking lots, as hostile to human existence as the surface of Mars. Relatively speaking the area around Amsterdam’s airport is a little like this too, but only relative to the rest of Amsterdam, which is supremely walk- and bike-friendly. The area around us has a lot of concrete, but also a certain amount of charm, and like the rest of the city, bike lanes in every direction. In the morning I go for a run across a short bridge and over to a sprawling park on the river called Het Amsterdamse Bos, which, honestly, is nicer than anything in central Amsterdam.
Now, sitting on a new 777, hoping our luck is better this time.
Nanoglyph 039 is published, on the John Muir Trail, Charleston, and annotating an entire test suite with t.Parallel.
Published fragment Being a good web denizen: Don’t strip EXIF metadata from photos, on taking it easy with the -strip flag on ImageMagick commands so that EXIF tags aren’t blown away for no good reason.
Published fragment A Postgres-friendly time comparison assertion for Go, on a time assertion for Go that ignores time.Time’s monotonic component and stops at microsecond-level precision.
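As a sketch of what such an assertion can look like (AssertTimeEqual is a hypothetical helper, not the fragment’s exact code), time.Time’s Truncate conveniently does both jobs at once: it drops the monotonic reading and rounds down to microsecond precision:

import (
	"testing"
	"time"
)

// AssertTimeEqual compares two times at microsecond precision. Truncate
// strips time.Time's monotonic clock reading and rounds down to the
// microsecond precision that Postgres timestamps store.
func AssertTimeEqual(t *testing.T, expected, actual time.Time) {
	t.Helper()

	e := expected.Truncate(time.Microsecond)
	a := actual.Truncate(time.Microsecond)

	if !e.Equal(a) {
		t.Errorf("times not equal:\n  expected: %v\n  actual:   %v", e, a)
	}
}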
Go’s a great language, but every honest user would have to acknowledge that mistakes were made.
Along with time’s magic date of Mon Jan 2, 2006 and the database/sql package, some of the biggest are html/template and text/template: runtime-checked only, nesting logic that defies explanation, necessitating custom helpers to perform operations as basic as adding two numbers together, and some of the worst docs ever written.
I was glad to discover that someone’s trying to write an alternative: Templ. It produces a hierarchy of nested partial views similar to React apps, and generates Go code in a way reminiscent of sqlc.
I tried it on a toy project and got to something workable quickly. Templ’s fairly new and has rough edges:
No line numbers in case of problems, making debugging difficult.
No equivalent to Gofmt for *.templ files, so you’re back to micromanaging whitespace and manually fixing imports.
Lots of regenerating code on changes. There’s a subcommand that’ll watch the disk, regenerate, and restart a command, but your success in integrating with this will vary wildly.
Still, a lot to be optimistic about. Go’s viability for writing a sustainable HTML-based web app is still very low, and it’d be great to see that change.
Published a photoblog of spending a little over a week on the John Muir Trail in the Sierra Nevada. We ended up exiting earlier than planned, which was disappointing at the time, but we went back a couple weeks later to finish off the last segment.
I recently switched this site back over to use a custom built ImageMagick binary. I had to move off of it a while back after the binary I built started producing this perplexing error:
$ /home/runner/imagemagick/bin/magick identify -list format
/home/runner/imagemagick/bin/magick: symbol lookup error: /home/runner/imagemagick/bin/magick: undefined symbol: heif_deinit
I’d prefer to be using ImageMagick’s mainline release of course, but for reasons I don’t completely understand, there’s a broad refusal to include HEIC support out of the box, thus the custom build.
The error above is impressively bad, but through brute force trial and error, I was eventually able to track it down to the use of an outdated libheif-dev. libheif-dev’s a package that wasn’t traditionally included in default APT sources, so I’d been getting it from the third party strukturag repository. Removing the third party repository sourced a newer version of libheif-dev and fixed the problem.
What’s a little mystifying is how hard all of this still is. A HEIC-friendly ImageMagick is hard to come by, and googling these problems uncovers little. But HEIC’s a common format that’s been in use for years on about half the world’s smartphones, and by extension, about half the world’s cameras. You’d think that by now there’d be more demand for working with this extremely common image format.
My off-the-cuff hypothesis is that the fact that there isn’t goes to show just how few people are building anything on their own these days. All the popular social and photography platforms handle HEIC transparently for you, to the point where probably 99.9+% of their users have no idea they’re even using it. Probably inevitable, but not a hopeful sign for a less consolidated future.
I’ve previously written about io_uring, possibly one of the more exciting kernel developments in recent memory, aiming to improve the performance of fundamental I/O operations.
But it’s not without major fault. A post from Google’s security team describes how, in the year leading up to June 2023, 60% of reported vulnerabilities were in io_uring, and Google’s paid out $1 million USD for io_uring exploits.
Due to high risk of compromise, Google’s turned it off in their properties:
ChromeOS: Disabled, with exploration into sandboxing it.
Android: Unreachable from apps. Future releases will lean on SELinux to limit io_uring access to select system processes.
Disabled on Google production servers.
Published fragment Getting Postgres logs in a GitHub Action, on how to get extra Postgres logging detail with three lines of YAML. This trick saved me as I was debugging a deadlock problem that was only reproducible in CI.
Farina and I successfully summitted Mt Whitney last week. It’s the highest peak in the lower 48, so although it felt great getting to the top, the air up there was sure thin.
Getting up and down in one day is a slog involving early morning headlamps, so we turned it into a more leisurely backpacking trip starting out of Horseshoe Meadow, a day to Rock Creek, a day to Guitar Lake, and then a day up and down Whitney and out the normal way to Whitney Portal. Guitar Lake’s at 11,600 ft, so it was only another ~3k feet ascent to the top.
A more thorough writeup to come!
I resent the fact that it feels like I don’t write on here for a day or two, and suddenly it’s a month later. The older you get, the faster time slips.
Last week I noticed that the build process for this site had become chronically broken, with the problem being that apt-get install webp
was coming up with a 404. I thought it was an issue with GitHub Actions, and opted not to try and find a workaround for the time being, hoping it’d resolve itself. I went backpacking, and by the time I got home, it had.
But it wasn’t a GitHub Actions problem. The webp apt-get mirror hadn’t gone down; rather, the package had been pulled. A critical vulnerability that’d been confirmed to have been exploited in the wild to install Pegasus spyware had been found in libwebp, affecting anything that used it, including Chrome, iMessage, and even this site. It was also the root of the zero-click iMessage exploit that’d been reported to exist a few weeks back in early September.
Here’s the commit that fixed the problem and a great writeup on how the exploit worked.
Published sequence 057, May Day.
Published fragment Why to prefer t.Cleanup to defer in tests with subtests using t.Parallel, as enforced by the tparallel lint. The semantics of each are close enough that it’s not obvious why it’d matter, but use of defer can indeed lead to bugs under some circumstances.
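A minimal sketch of the failure mode (openTestDB and its return value are made-up stand-ins for any shared resource): a parent test with parallel subtests returns before those subtests actually run, so a defer in the parent tears the resource down too early, while t.Cleanup waits for the subtests to finish.

import "testing"

func TestThings(t *testing.T) {
	db := openTestDB(t) // hypothetical helper returning a shared resource

	// With `defer db.Close()`, the close would run as soon as TestThings
	// returns -- which, with parallel subtests, is before they've actually
	// executed, leaving them with a closed resource. t.Cleanup only runs
	// after all subtests (parallel ones included) have finished.
	t.Cleanup(db.Close)

	t.Run("ping", func(t *testing.T) {
		t.Parallel()
		if err := db.Ping(); err != nil {
			t.Fatal(err)
		}
	})
}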
Published On Using Go’s t.Parallel(), on annotating our Go tests with t.Parallel(), which has some advantages for DX iteration speed, and when combined with go test -race, also helps root out tricky data races.
Published sequence 056, floor 03.
Published fragment An email redaction function for Postgres, on derisking somewhat accidentally leaking PII in saved queries.
This is awesome: uses.tech (or see the GitHub repository), a directory of peoples’ /uses pages, which detail dev gear, software, workspaces, and configurations.
It reminds me (in a very positive sense) of the pre-social media internet. No politics, no outrage, no clickbait, just a bunch of hackers who took the time to build a writeup on how they work because they think it’s cool.
Published fragment Rate limiting, DDOS, and hyperbole, on examining the nature of a 429 (too many requests), and the purpose it serves.
Sqlc 1.19 just dropped. Its headline feature is the new command sqlc vet that checks queries against lint-style rules:
rules:
- name: no-pg
  message: "invalid engine: postgresql"
  rule: |
    config.engine == "postgresql"
- name: no-delete
  message: "don't use delete statements"
  rule: |
    query.sql.contains("DELETE")
- name: only-one-param
  message: "too many parameters"
  rule: |
    query.params.size() > 1
- name: no-exec
  message: "don't use exec"
  rule: |
    query.cmd == "exec"
It includes a built-in rule that’ll connect to a configured database and prepare every sqlc query against it, which, in addition to parsing, adds more certainty that queries are well-formed.
We’ve been on sqlc for a year and a half now. Occasionally I find myself missing the flexibility of an ORM, but every time I do, I recall the obtuseness of the DSL and the lack of certainty that what you write is correct. Sqlc by comparison is a little rigid at times, but aside from that, we’ve observed negligible downsides.
Published fragment A TestTx helper in Go using t.Cleanup, on an elegant way of combining test transactions with Go’s built-in test abstractions.
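The general shape is something like this sketch (assuming pgx and a pgxpool.Pool; the fragment has the real details): begin a transaction for the test and register its rollback with t.Cleanup so nothing is ever committed.

import (
	"context"
	"errors"
	"testing"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

// TestTx opens a transaction for the duration of a test, rolling it back
// automatically after the test (and all its subtests) finish.
func TestTx(ctx context.Context, t *testing.T, pool *pgxpool.Pool) pgx.Tx {
	t.Helper()

	tx, err := pool.Begin(ctx)
	if err != nil {
		t.Fatalf("error beginning test transaction: %v", err)
	}

	t.Cleanup(func() {
		// ErrTxClosed just means the test finished the transaction itself,
		// which is harmless here.
		if err := tx.Rollback(ctx); err != nil && !errors.Is(err, pgx.ErrTxClosed) {
			t.Errorf("error rolling back test transaction: %v", err)
		}
	})

	return tx
}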
Published fragment 100% test coverage, on whether it’s a good or bad thing.
DHH on leaving the cloud for their own hardware. Not exactly a new idea – famously they’ve been preceded by the likes of Dropbox and GitHub – but novel for a company as small as Basecamp. He says they spent half a million on servers, but stand to save 3x that at $1.5 million every year.
The biggest thing that comes to mind is on-call. Back at iStock in 2011, our ops guys made all-hours trips to the datacenters with depressing regularity, so while self-hosting was possible, its cost was much higher than just the hardware bill. But that was a long time ago, and a lot of it was probably self-inflicted, so maybe things are easier nowadays.
The next thing that comes to mind is blob storage (e.g. S3). It’s such a useful abstraction, and although open-source alternatives exist, it’s a lot. Would even a company hardened in their intention to self-host turn their back on it? I would guess not.
Twitter killed my API access yesterday. I knew it was coming, but I’d been holding out irrational hope that legacy, totally-non-commercial uses of it were being allowed continuity on a case-by-case basis. This deluded fantasy was reinforced by the fact that my integrations survived for months after the official API closure announcements had been made, and they’d seemingly already booted most applications.
Run $(go env GOPATH)/bin/qself sync-all --goodreads-path data/goodreads.toml --twitter-path data/twitter.toml
[INFO] (goodreads) (segment 6) Paging; num readings accumulated: 0, page: 6
[INFO] (goodreads) (segment 3) Paging; num readings accumulated: 0, page: 3
[INFO] (goodreads) (segment 1) Paging; num readings accumulated: 0, page: 1
...
[INFO] (goodreads) (segment 1) Paging; num readings accumulated: 373, page: 25
[INFO] (goodreads) (segment 1) Page 25 beyond known end of 20; stopping
[INFO] (goodreads) Found existing 'data/goodreads.toml'; attempting merge of 373 existing readings(s) with 373 current readings(s)
[INFO] (goodreads) Writing 373 readings(s) to 'data/goodreads.toml'
error syncing all: error getting user 'brandur': twitter: 32 Could not authenticate you.
Error: Process completed with exit code 1.
This is the qself (“quantified self”) project which powered my Twitter archive. It lost the Strava API, has now lost the Twitter API, and is left with only the Goodreads API going, which also stopped issuing API keys in 2020, leaving … nothing?
And the internet, founded on ideals of radical openness, continues to close.
I met with the creators/maintainers of sqlc the other day, and they asked about the project’s weaknesses (I promise this wasn’t me complaining randomly, as is usually the case). I sent over a few somewhat problematic queries, and of those, I think this is my favorite:
-- name: ClusterGetPage :many
SELECT *
FROM cluster
WHERE team_id = any(@team_id::uuid[])
AND archived_at IS NULL
AND parent_id IS NULL
AND
CASE WHEN @cursor_specified::boolean THEN
CASE WHEN @by_id::boolean AND NOT @descending::boolean THEN id::text > @cursor_threshold::text
WHEN @by_id AND @descending THEN id::text < @cursor_threshold
WHEN @by_name::boolean AND NOT @descending THEN lower(name) > lower(@cursor_threshold)
WHEN @by_name AND @descending THEN lower(name) < lower(@cursor_threshold)
END
ELSE
id = id
END
ORDER BY
CASE WHEN @by_id AND NOT @descending THEN id END ASC,
CASE WHEN @by_id AND @descending THEN id END DESC,
CASE WHEN @by_name AND NOT @descending THEN lower(name) END ASC,
CASE WHEN @by_name AND @descending THEN lower(name) END DESC
LIMIT @max;
Unlike a more traditional ORM, sqlc can’t arbitrarily chain expressions. All the SQL that can be in the SQL is in the SQL, which means that any logical branching has to be done with CASE/WHEN expressions.
This monstrosity has a straightforward objective: listing, with pagination. But it does allow pagination along two dimensions (id and name), and with a cursor or without, which is where it gets a little gnarly.
That said, compared to pagination logic whose constituent query-building parts are littered across a half-dozen pagination utility modules, and whose totality is obscured by its piecemeal nature, maybe the sqlc version isn’t so bad?
On GitHub’s HQ, their infamous Oval Office, and the battle over its rug. The 2010s were a neat time in San Francisco, with many of the most important products in the world being developed a few blocks from each other, and new offices coming online one after another, each more extravagant than the last.
A little over a month after the new office opened, one of GitHub’s employees opened an internal discussion thread. A feminist hacker space had launched a crowdfunding campaign with a satirical perk, priced at $50,000: a “Meritocracy is a Joke” rug, custom-designed “for your company’s oval office [sic], to show you don’t support the myth of meritocracy (one of the tech industry’s most prevalent excuses for women and minorities being marginalized).” Given that some people were clearly offended by the word “meritocracy,” asked the original poster, should we be using the term?
A broad lesson on how much easier it is to destroy than it is to create.
Today, that rug is gone, that office is gone, all comparable offices in SF are gone, as is the essential will of the industry to be proud of its values and what it’s creating. Something as quirky as GitHub 3.0 could never be built in the present.
Defending the rug could have been a teaching moment, an opportunity to show why it is important to declare that anyone can do what they put their minds to, even if it isn’t always perfectly executed. It was a small moment, but conceding this point paved the way for more people to tug angrily at tech’s monuments in the years that followed—to which tech willingly folded, each time. Tech needs to find the courage again to embrace its values, which could command more respect from its critics than simply apologizing. If tech can look past the totalizing shame it currently feels, it can more honestly evaluate both its accomplishments and its shortcomings, and find a way to weave them together into a memorable public legacy.
Would love to see it.
Published sequence 055, lighter than air.
Nanoglyph 038 is published, on London and gardening 500s, with a bonus section about the chaos which is walking in the UK.
Since my last post complained about things that haven’t changed in Germany, I’ll follow up with some that have:
Deutsche Bahn now has an app that can purchase or import tickets, and it works well. Ticket checkers can scan a QR code to verify your ticket, where previously they’d only take a paper printout, which was looking very old fashioned in 2010, and a huge pain for visitors that couldn’t manage to fit a printer in their carry-on. It can also purchase tickets for local rail systems like MDV in Leipzig.
The app has real-time platform information for each station. You can check your train’s platform before arriving, and use the information to immediately go to exactly the right place. It sends accurate push notifications, and alerted me to a last minute (literally) platform change in Berlin.
Onboard wifi is now free, and it works.
This was all a major relief after having just experienced the comparative hellscape which is the UK rail system.
It’s been years since I was in Germany last, and I was curious to see what’s changed. Particularly:
Germany is a cash-heavy society. This might not be a problem except that no one has any change, no one wants to take card, no one wants to break €50 notes, and ATMs won’t dispense anything but €50 notes. Has anything changed?
The first couple times I went to Berlin, bars were like a time warp back to North America in the 80s and 90s, with an all-pervasive, ever-present cloud of smoke in every one of them. And even outside of bars, you’d find people smoking in every stoop and alley. Has the country stopped smoking yet?
I was optimistic on the first point at least, after having successfully spent two weeks in the UK without having withdrawn a single pound. Surely conventions between European countries can’t be too dissimilar.
But alas, they are. The answers are “no” and “no”. Maybe next time.
After a flight cancellation that snowballed into a logistics nightmare, I took the train from London to Berlin.
Four hours on the Eurostar London to Amsterdam (the fast train best known for making Channel Tunnel trips between London and Paris), and then eight hours on a slower train across Germany’s countryside to Berlin. Wake at 4 AM (to preclear customs for Eurostar), arrive at 8 PM, with a one hour forward time difference. I wouldn’t recommend it, but hey, it’s cool that it’s possible.
Published sequence 052, the Flying Scotsman and locomotive 35018.
Published fragment PGX + sqlc v4 to v5 upgrade notes. Somewhat painful, with 114 files and ~800 LOCs changed.
Nanoglyph 037 is published, on speed as a UX feature.
Discourse sees Ruby rendering times decrease 16-17% after enabling YJIT in Ruby 3.2. A nice performance boost for a minimal time investment that amounts to tweaking the RUBY_YJIT_ENABLE env var and doing some testing.
Published sequence 052, La Grande Arche.
Published sequence 051, Musée de l’Armée.
Published sequence 050, Rue Claude Monet.
Nanoglyph 036 is published, on Atlanta, job queues, batch-wise operations.
Published sequence 048, under construction.
32 consecutive days of running at least 5 km (average 7-8 km).
Observations: Get it done early, habit is everything. For the lazy amongst us (me), it may require bullying yourself out the door. I hate eating my vegetables, but I know that I should.
Tomorrow: Flight to Paris wherein I time travel half a day forward. I’ll do a morning San Francisco run, but one after landing in Paris at 4 PM on the day after, exhausted from the hell that is a transatlantic flight in economy, will be hard. Is this the end? Find out next week.
Published sequence 047, venue.
Published sequence 046, whale shark(s).
Published fragment PG advisory locks in Go with built-in hashes.
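The core idea is small enough to sketch here (names are mine, not the fragment’s; assuming pgx): Postgres advisory locks take a bigint key, so hash an arbitrary string down to 64 bits with one of Go’s built-in hashes and lock on that.

import (
	"context"
	"hash/fnv"

	"github.com/jackc/pgx/v5"
)

// AdvisoryLockByName hashes a string to the bigint key Postgres advisory
// locks expect, then takes a transaction-scoped lock that's released
// automatically at commit or rollback.
func AdvisoryLockByName(ctx context.Context, tx pgx.Tx, name string) error {
	h := fnv.New64()
	h.Write([]byte(name))

	// The conversion may wrap to a negative number, which is fine: Postgres
	// just needs a stable 64-bit value.
	key := int64(h.Sum64())

	_, err := tx.Exec(ctx, "SELECT pg_advisory_xact_lock($1)", key)
	return err
}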
Today I discovered the power of Postgres’ DISTINCT ON syntax, a useful feature that did exactly what I wanted it to with my first try.
Everyone knows the common distinct function (e.g. SELECT distinct(name)), which gets all unique values for a given field.
DISTINCT ON is similar, but it groups rows by a distinct field, and then returns whatever you want from one of them based on an ORDER BY clause.
Example: we have saved queries that have one or more query runs each. Return the latest query run (finished_at DESC) that either failed or succeeded for each saved query:
-- name: QueryGetFailedOrSucceededBySavedQueryIDMany :many
SELECT DISTINCT ON (saved_query_id) *
FROM query_run
WHERE (status = 'failed' OR status = 'succeeded')
  AND saved_query_id = any(@saved_query_id::uuid[])
ORDER BY saved_query_id, finished_at DESC;
Easy to use and concise.
A piece from LWN on the early days of Linux at the University of Helsinki.
On Linus taking down the university’s fragile Sun machines:
At some point, Linux gained support for Ethernet and TCP/IP. That meant one could read Usenet without having to use a modem. Alas, early Linux networking code was occasionally a little rough, having been written from scratch. At one point, Linux would send some broken packets that took down all of the Sun machines on the network. As it was difficult to get the Sun kernel fixed, Linux was banned from the university network until its bug was fixed. Not having Usenet access from one’s desk is a great motivator.
On the version 1.0 release event (complete with press) in 1994:
In the spring of 1994 we felt that Linux was done. Finished. Nothing more to add. One could use Linux to compile itself, to read Usenet, and run many copies of the xeyes program at once. We decided to release version 1.0 and arranged a release event. The Finnish computer press was invited, and a TV station even sent a crew. Most of the event consisted of ceremonially compiling Linux 1.0 in the background, while Linus and others spoke about what Linux was and what it was good for. Linus explained that commercial Unix for a PC was so expensive that it was easier to write your own.
And it was recorded, complete with a young, handsome university student version of Linus.
After a year of inactivity and (more recently) some strange trouble with Mailgun mailing lists, Nanoglyph 035 is published, on Go generics, and our endeavor to build a safer API framework with them. This newsletter may be struggling, but it’s not completely dead, yet.
Linked from HN today, Future Blues, a 90s era Cowboy Bebop fansite, complete with Photoshop layout, shareable banner, and guestbook.
I miss the old internet. What’s available today is infinitely more powerful compared to what we had back then, but in many ways, so much worse. In the old days people built sites on subjects they were passionate about, and doing so was its own reward – this was before the web became monetized, and long before the pursuit of social media clout. There was no Squarespace, so people did their own design and coding, even if they weren’t technologists. You might’ve been hosting on Geocities, but it was also common to be running a web server on the computer under your desk.
Today, 95% of the internet has converged on the same bland big-fonts-in-center-column-with-lots-of-whitespace design (of which this site is guilty), and slathered with paywalls, cookie selectors, login modals, toast notifications, trackers, and so many other dark patterns that they’re hard to keep track of. And that’s the good news. The bad is that most people are skipping the website completely and retreating into the walled gardens of Instagram, TikTok, YouTube, or Twitter.
A hat tip to the few out there who still do it the old way.
And at long last, finished redesigning the main articles pages themselves, for example.
I’ve been intending to do this for at least two years, and for once am glad that I procrastinated on it. If I’d done it even mid-last year, it would’ve been in vanilla CSS instead of Tailwind, and over the subsequent years all the same problems around maintainability would’ve recurred. The site’s now close to being 100% Tailwind, and I’m more confident that I’ll be able to change things in the future without having to page a bunch of site-specific CSS context back into my brain, and without accidentally breaking anything as styles, you know, cascade.
The redesign is far from perfect (e.g. the tables of contents that were there before are MIA for now), and I’ll be tweaking it as I go, but I wanted to get something out the door instead of continuously delaying it into infinity. I’m in Europe all next month, and while I hope to do many writing dispatches from there, the likelihood is that no major design projects will be forthcoming.
Redesigned my newsletters page. It’s a sad day, but I’m officially calling Passages & Glass defunct. Nanoglyph very well may be too unless I can get one out the door soon.
A Berlin-based prompt artist wins a category of Sony World Photography Awards, and because he’d submitted to make a point, refuses the prize.
This is the tip of the iceberg in two ways:
Why does it matter? This is going to cheapen content in a fundamental way. News outlets, social media platforms, and Google search results are going to be flooded with this stuff. Even employers trying to pay for original writing or art will have a hard time telling if the person is lying and just turning in GPT output. There will be job losses. Arguably, they’re here already.
Long term, there will be a loss in human aspiration. Remember how your parents or grandparents knew how to build things in their garage or how to make and repair clothes, and now almost no one does, as we replaced those skills with fast fashion and Amazon.com? How many kids can we expect to spend the thousands of hours learning to write, draw, or make music when they can accomplish better results with instant gratification by prompting GPT? Many would contend that this is fine, humans don’t need to be doing that stuff. But to that I would ask two questions:
A few years ago I wrote Tweeting for 10,000 years, a thought experiment in how to improve the longevity of software, which is infamously prone to breakage by way of bugs or changes in its underlying infrastructure. It explores ideas like choosing platforms likely to be long-lived, stable languages, minimizing dependencies, and deploying self-contained binaries.
In it I list two of the top risk factors for its eventual demise as:
Changes in Twitter’s API could spell the end. This would take the form of a backwards-incompatible change like a new required parameter, change in the structure of responses, or adjustment to how applications authenticate.
Relatedly, changes in Twitter’s product are also dangerous. They could move to a new pricing model, remodel the product’s core design, or fold as a company.
Well, if it’d made it 10,000 years you probably wouldn’t be reading this. Today I got this email:
This is a notice that your app - 10000-years - has been suspended from accessing the Twitter API.
Please visit developer.twitter.com to sign up to our new Free, Basic or Enterprise access tiers.
The experiment’s final longevity was 4 years, 8 months. Not half bad, but a little short of the stated goal.
It may be for the best. What I put in those time capsule messages probably wouldn’t have aged well anyway. But the article’s thesis was correct – writing software that withstands time and entropy on all fronts sure isn’t easy.
Today I learned that Rack servers like Puma can run an operation after a request has been completed with rack.after_reply. There have been various ways to accomplish this since the early 2010s, but I hadn’t realized that it’d become standardized.
This can be a great way to offload more expensive operations like network calls out-of-band from user requests to improve latency (even if incompatible with use of request-level transactions). In the linked post, GitHub describes using rack.after_reply for emitting statistics, and they’d previously used a similar technique to perform Ruby garbage collection between requests, although they later stopped doing that.
At Stripe we did something similar to move the generation of “events” (for use in sending webhooks) out-of-band. Generating events involved rendering API resources, which was expensive, making generating events also expensive, so taking them out-of-band shaved somewhere in the neighborhood of 10-100 ms of latency (can’t remember the exact numbers) off some of our most critical API paths.
Here’s me evangelizing the idea of an ephemeral database one more time.
When we first put ours in, I wasn’t totally convinced that it was the right decision. Keeping high volume data out of the core path seemed like a good idea in a conceptual sense, but another migration line and another database to look after would make development and operations harder.
Months later, I’m certain it was the right decision. Our core database (accounts, clusters, subscriptions, billing, audit log, etc.) is stable at less than 1 GB of data, while the ephemeral DB has ballooned to 300 GB by virtue of the kind of raw, voluminous data it holds (metrics). Having the two operationally independent in case of a major catastrophe helps me sleep better at night. If our ephemeral DB were to go hard down, metrics pages wouldn’t load, but everything else would work just fine. Maintaining a separate migration line for the ephemeral DB is mildly annoying, but that annoyance is so mild that I’ve stopped noticing. A single make target bootstraps both databases for development/testing, and 95%+ of database changes happen on the main line.
In software engineering, tradeoffs abound. But I’m giving the ephemeral DB the platinum award of computation – major benefits, and with downsides so minuscule that they’re hardly noticed.
Published fragment Lean = fast, on the unreasonable effectiveness of small teams and sharp tools for getting features out the door.
Updated now with recent features shipped and a new run commute.
Following up from yesterday, redid the fragments index. More consistency. More Tailwind. Less hand-written CSS that I’d given up on being able to maintain years ago. Only like three quarters of the site left to go.
I redesigned the layout for fragments to bring it more in line with the “V2” redesign used for atoms and my now page. I hadn’t intended to do this today, but started prototyping this morning, and it got far enough along that, although far from perfect, I was just like “why not”.
See for example, Ephemeral DB.
Glaciers flow faster than this redesign is coming along, but it’s seeing slow progress forward.
Two weeks into a WeWork subscription. An aspect I love is the gear. Most people are just using a pretty vanilla laptop setup like me, with a large minority using a laptop stand and external keyboard.
But some go all the way, with large form factor notebooks, tablets, and external screens, sometimes multiple of each. They sit down, start pulling things out of their bag, and within minutes a fully operational battlestation appears. They can be creative, but unlike a traditional desk setup, there are constraints – it all has to be portable. Calibrated in the morning. Collapsed in the afternoon. No trace left behind.
Pragmatic, minimalist, and with a dash of Marie Kondo.
Published fragment Stay mainline, on GitHub’s practice of keeping their code synchronized with Rails main.
Published sequence 044, indomitable.
Okta is considered the gold standard on the identity management front, the sleek Silicon Valley entrant to a market of long-toothed competitors.
I was looking into building an Okta integration for inclusion in the Okta Integration Network (OIN), and something that I found incredibly surprising is how manual the whole process is. Intuitively, I would’ve thought that an Okta admin would browse the catalog, install an app, and be automatically granted access to its features as machines cooperated behind the scenes to make it happen.
Here’s how it actually works: an Okta admin browses the catalog, installs an app, rummages around in Okta dashboards extracting a set of URLs, IDs, and secrets, then manually transmits them to the app owner to facilitate integration. CloudFlare provides an Okta-specific Dashboard to enter them. Brex will send you an email where you copy them in. Others suggest visiting their sites and opening a support ticket with the magic values.
I thought I (and all of them) must be missing something, but I don’t think so. Here’s Okta developer support confirming this is how it’s done.
Does this seem, like, crazy, to anybody else? Of course there’s still going to be some manual input on the web, but for this to be the normal process for the most preeminent IDM on the internet? Again, surprising.
A few weeks ago Tailscale announced they’d be allowing anyone to bring their own OIDC provider. My first reaction to this was that it seemed like overkill more likely than not to create unforeseen problems. But after looking into Okta, my new train of thought is that if we were to build that, we’d not only provide the openness of OIDC standard, but it could act alternatively as our Okta onboarding portal for free. Is this the way?
For the last year, absent an even semi-reasonable work-from-home setup, I’ve been working in and out of cafes across town. It’s okay, but it leaves you feeling like a vagabond, always thinking about whether you’ve bought a coffee recently enough for the workers/owners not to be metastasizing latent resentment towards you, and always negotiating for wi-fi and bathroom codes.
Last week I finally got sick of it and got WeWork. Paired with a shiny new gym membership, it’s had the major benefit of letting me build in a morning run commute, guaranteeing daily exercise, and leaving me with an intense mental acuity for the first couple hours of the day, an effect now attributed to endocannabinoids that are part of the runner’s high.
Modern technology is great. I have WeWork All Access, meaning I can’t leave anything at the office. My pack contains deodorant, a change of clothes, and three pieces of technology:
The M1 (paired with an eye on Activity Monitor for power offenders) gives me all-day power and then some. I don’t carry charging cables or any other peripherals. The grotesqueries which were the butterfly keyboard and Touch Bar are finally dead, so no external keyboards required. With built-in retina display, fast Cmd-Tab, and good window management I don’t miss an external monitor.
This seems obvious, but even five years ago it would’ve been a pipe dream. All this technology is cool, but I have to give the tech-of-the-decade award to the M1, which came out of left field, and has fundamentally changed the expectations around responsiveness and power requirements in computation.
A few weeks back I’d mentioned in passing that we’d migrated off Keycloak.
I got a question about why, so I’m putting down some rationale here. I respect open-source projects a lot, so this isn’t meant to throw shade on Keycloak in any way, but I believe in being transparent about these things so people can read alternative arguments and make better-informed tech decisions.
A few reasons why we moved away from Keycloak:
It has an API for integration, but its API reference offers precious little beyond the names of API resources and fields. There was never enough documentation to integrate a new feature, always necessitating trial and error.
Core authentication-related flows were the most likely to regress because writing tests against a remote API requires a lot of stubbing. With heavy stubbing involved, you really only end up writing tests that verify that your stubs are set up the way you expected them to be, and that made it very easy to break things because mistakes would not be caught by CI.
Keycloak pages like login or email verification are customized using a template language that was divergent from our main frontend stack, and which no one was ever particularly excited to write. They’d languish without updates and look awful compared to the rest of the site. This is also why some features were slow to happen, like multi-factor support, which we’ll finally be shipping next week.
Operationally, it was a black box wherein if something went wrong, no one knew how to fix it. We had a very near miss where a Heroku regression was crashing Keycloak on start up with an inscrutable backtrace, and which luckily manifested in our staging environment first, but which we knew was coming for prod on the next 24 hour dyno cycle. None of the people who’d chosen and integrated Keycloak had even the faintest idea how to diagnose the problem or fix it. In the end we were able to debug it by digging deeply in Keycloak source code, but were within hours of our login system being hard down with no fix in sight.
I wasn’t super confident of its security bona fides given the glimpses into its internals I’d seen. For example, it defaults to PBKDF2 for password hashing at 27,500 iterations, far below the 2023 recommendation from OWASP of 600,000 iterations. I admit to having only this one example and maybe everything else is above bar, but who knows.
It’s a huge Java app and not that cheap to run. It needed a performance dyno on Heroku due to high memory requirements, and between that and a database in staging and prod it was $800/month, or about half our total bill.
I’m usually against the NIH mindset, but sometimes it’s right. Looking up best practices for storing passwords is easier than looking up how Keycloak does it, the answer to which isn’t documented and only exists in Discourse forums or by reading source code. Integrating SSO isn’t that hard anymore thanks to the widespread standardization around OIDC. Implementing flows for email verifications and password resets is a little more work, but you end up having to do most of it under Keycloak anyway if you want a custom theme.
DPReview.com is shutting down on April 10th.
G’damn. This is one of maybe three to five websites that I visit every day. It’s well known for its in-depth reviews, but was also one of the few sites out there that hadn’t degenerated into the labyrinth of clickbait and dark patterns that is the modern web. If I wanted to see which lenses Canon’s released this year or last, DPreview is the only place I’d think about getting that information, far more usable than even Canon’s official site.
I’d forgotten that Amazon had bought them – a shame. An underperforming division of the trillion-dollar everything-store company might be a reasonably sustainable independent business on its own terms.
Published fragment on last week’s layoffs from Meta, on Mark Zuckerberg’s words on manager-heavy organizations and remote work.
The crypto crowd is at it again, with the latest being Balaji saying he’d bet $1M that Bitcoin will be worth $1M within 90 days (current price is $26k). Multiple high net worth individuals have committed to taking the bet. I assume what happens next is that the idea is memoryholed, but this could be an epic win for crypto-skeptics if it takes.
Published fragment Policy on util packages.
Stated simply, general util packages aren’t allowed because they tend to become dumping grounds. But specifically targeted util packages with narrow, focused domains are.
util/
assetutil/
cookieutil/
cryptoutil/
emailutil/
jsonutil/
maputil/
passwordutil/
ptrutil/
randutil/
signingutil/
sliceutil/
stringutil/
testutil/
timeutil/
uuidutil/
Published fragment CL.THROTTLE implemented in DragonflyDB, in which the rate limiting command from redis-cell is implemented in DragonflyDB.
CL.THROTTLE user123 15 30 60 1
            ▲       ▲  ▲  ▲  ▲
            |       |  |  |  └───── apply 1 token (default if omitted)
            |       |  └──┴─────── 30 tokens / 60 seconds
            |       └───────────── 15 max_burst
            └───────────────────── key "user123"
Published fragment RFCs and review councils, prompted by Squarespace’s write up on their RFC and review process, which sounds very close to Stripe’s.
Published sequence 043, outpost.
Downhill skiing is a very uneven sport. Trips are booked months in advance, and by the time you get there, you’re at the mercy of the weather gods. Occasionally you can be buried in snow, like the Tahoe region was this last week with 140”+ of fresh powder. Other times you get nothing, and have to find ways to make your own fun under scratchy conditions.
After years of bad luck skiing BC’s interior, we finally got a good one. A couple big dumps last week before we arrived to establish a base, and fresh snow Sunday, Monday, Tuesday, and Thursday while we were here. What a year.
A small CI improvement we made last week: when running Go workflows we’d generally define a variable for GO_VERSION near the top and pass it into each actions/setup-go below:
env:
  GO_VERSION: 1.20
steps:
  - name: Install Go
    uses: actions/setup-go@v5
    with:
      cache: true
      check-latest: true
      go-version: ${{ env.GO_VERSION }}
GitHub Actions don’t allow steps to be extracted and modularized in any way, so with many jobs in the workflow, the actions/setup-go step needs to be duplicated many times over. Defining GO_VERSION meant that when upgrading to a new version of Go, we only had to change one value in the file.
But it still wasn’t optimal because when upgrading to a new version of Go, we’d have to change the version in go.mod (twice actually, because we’re on Heroku and Heroku’s dumb +heroku goVersion 1.20 magic comment is mandatory), and then also in every GitHub Actions workflow .yaml file. It’s easy to forget one and break something.
There’s an easy solution. It turns out that actions/setup-go supports a go-version-file directive that reads a version out of a go.mod file:
steps:
  - name: Install Go
    uses: actions/setup-go@v5
    with:
      cache: true
      check-latest: true
      go-version-file: "go.mod"
Now when we upgrade to a new Go, only one file is changed.
From Amazon Route 53: Workload isolation using shuffle sharding.
In short, Route 53 domains are sharded so that they map to one of 2,048 virtual servers. The sharding process means that some domains share servers with other domains, and under a normal sharding algorithm, domains would share all their servers with the same domains on their shard. This isn’t great because if I’m an unlucky customer on the same shard as github.com, which is getting DDOSed all the time, then when their shard is taken down I’m down too. “Shuffle sharding” defines sharding such that even if I share servers with github.com, some of the servers I map to are “shuffled” so that I’m guaranteed not to share all of them with github.com. Even if every server hosting github.com goes down, some of mine are still up.
The link above explains it in more detail and with diagrams.
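As a toy illustration of the idea (this is not AWS’s algorithm, just the general shape): give each customer a stable, pseudo-random subset of k servers out of n, so that any two customers are very unlikely to share their entire set.

import (
	"hash/fnv"
	"math/rand"
)

// ShuffleShard deterministically assigns a customer k of n servers. Because
// each customer's subset is an independent pseudo-random pick, losing every
// server in one customer's shard still leaves most other customers with
// some of their servers standing.
func ShuffleShard(customer string, n, k int) []int {
	// Seed a PRNG from the customer name so the assignment is stable.
	h := fnv.New64a()
	h.Write([]byte(customer))
	r := rand.New(rand.NewSource(int64(h.Sum64())))

	return r.Perm(n)[:k]
}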
Last week, in a change presumably driven by LastPass’ recent compromise, Bitwarden released support for Argon2. We’d moved to Argon2id just two days before. Some good news about LastPass’ loss is that it’s been everybody else’s gain as other products take notice and shore up their own security.
After a three week odyssey involving cascading dependency failures and 24-hour build loops, Go 1.20 is released on Homebrew.
Published fragment Findings from six months of running govulncheck in CI.
Vulnerability #1: GO-2023-1571
A maliciously crafted HTTP/2 stream could cause excessive CPU
consumption in the HPACK decoder, sufficient to cause a denial
of service from a small number of small requests.
More info: https://pkg.go.dev/vuln/GO-2023-1571
Module: golang.org/x/net
Found in: golang.org/x/net@v0.6.0
Fixed in: golang.org/x/net@v0.7.0
Call stacks in your code:
Error: client/awsclient/aws_client.go:156:34: awsclient.Client.S3_GetObject
calls github.com/aws/aws-sdk-go-v2/service/s3.Client.GetObject,
which eventually calls golang.org/x/net/http2.noDialH2RoundTripper.RoundTrip
A follow up from yesterday’s post on GitHub being close to having ended up on a Postgres stack: somebody pulled Tobi from Shopify into the conversation, and it turns out Shopify was a near miss on Postgres too:
Yea. Shopify beta was on Postgres. But in our case it was the poor state of replication (in 2005) that made me switch to mysql before launch.
Published fragment Honest health checks that hit the database.
errGroup, ctx := errgroup.WithContext(ctx)
errGroup.Go(func() error { return checkDatabase(ctx, svc.Begin) })
errGroup.Go(func() error { return checkDatabase(ctx, svc.BeginEphemeral) })
if err := errGroup.Wait(); err != nil {
return nil, apierror.NewServiceUnavailableErrorf(ctx, "Health check error: %v.", err)
}
In a surprise twist, Chris Wanstrath (@defunkt), one of the founders of GitHub, has taken back to Twitter after a long dormancy. Better yet, the subject matter is entirely about engineering, product, and occasionally metal. For example:
Last tidbit on DMs: We almost moved from MySQL to Postgres early on, and even had a branch going, but some of the horrible SQL queries I wrote for the private messaging feature didn’t work in Postgres. So, MySQL stayed.
You have to wonder about the counterfactual there, in which GitHub switched to Postgres and had to scale it, possibly having to develop a Vitess-like layer on top.
GitHub may just be the only decent candidate for the Apple of our generation, a company that did product well, continues to do it well, didn’t fail or turn evil, and whose history is something you’d actually want to read about. Chris, maybe do a book next?
The latest in the accidental culture war: Twitter is sunsetting SMS as a 2FA method.
As usual, the I-hate-whoever-WaPo-tells-me-to crowd immediately lost their minds, flying into spectacular fits of performative outrage, dramatically falling to the ground and beating their tiny fists against the floor wailing about Elon compromising security for society’s most vulnerable, like the downtrodden legacy blue-check elite, whose refusal to pay $8 puts them on moral parity with the conscientious objectors who fled north to dodge their draft for the Vietnam War.
For anyone who doesn’t follow this closely, SMS isn’t just the worst way to do second factor auth, it’s the worst by far:
If you’re a user, you shouldn’t use it. If you’re a provider, you shouldn’t provide it.
Sure, Musk’s motivation for retiring it is probably mostly (4), but that doesn’t make it a bad idea. Sure, it’s weird that Twitter Blue users can still use it, but that’s Twitter saying, “if you pay money, you have a license to do whatever you want, even opt into reduced security (at your own risk)”. You also have to think that paying users are < 1% of the platform, so their continued option to use SMS has a negligible effect on the whole.
One more note on password migrations: damn it, we did find one bug. Luckily, an internal user ran into it early and it was never experienced publicly.
We run our Postgres service on Postgres, and develop like we prescribe others to, with a core tenet being data consistency. I’d added these CHECK constraints:
ALTER TABLE account
ADD CONSTRAINT pbkdf2_not_null_check CHECK (
(algorithm <> 'argon2id-pbkdf2-sha256' AND algorithm <> 'pbkdf2-sha256')
OR ((algorithm = 'argon2id-pbkdf2-sha256' OR algorithm = 'pbkdf2-sha256')
AND (pbkdf2_hash_iterations IS NOT NULL))
);
ALTER TABLE account
ADD CONSTRAINT pbkdf2_null_check CHECK (
(algorithm = 'argon2id-pbkdf2-sha256' OR algorithm = 'pbkdf2-sha256')
OR ((algorithm <> 'argon2id-pbkdf2-sha256' AND algorithm <> 'pbkdf2-sha256')
AND (pbkdf2_hash_iterations IS NULL))
);
They’re a little awkward to read, but the first guarantees we have a value for PBKDF2 hash iterations where the password involves PBKDF2. The second guarantees we don’t have a value where it doesn’t.
There was a case in the password reset flow where we’d upgrade a newly reset password to Argon2id, but if the old password was PBKDF2, we forgot to empty its PBKDF2 hash iterations. The CHECK constraint failed along with the password reset.
We got a fix out quickly, but there’s an argument to be made that it wasn’t worth breaking a critical path (even temporarily) for data consistency.
The cost/benefit of this case is close enough that I might agree, but would still form a counterargument. A 500 is bad, but when this sort of thing would happen at Stripe (on Mongo, no CHECKs), instead of some 500s you’d end up with a pile of inconsistent data written that someone would have to manually repair afterwards. A small error that was in production for only a minute or two would lead to a multi-week cleanup effort, with many a bespoke migration and outbound email written.
When CHECKs (or FKs, data type validations, etc.) fail, no tainted state makes its way to the database. There’s user impact until the problem is fixed, but no impact afterwards. Also, even if a user request succeeds in the non-CHECK alternative with inconsistent data written, it doesn’t mean that things are actually right, and it often leads to users having to find out for themselves that something’s broken thanks to the inconsistencies.
We’ll probably keep doing the CHECKs.
Follow up from yesterday’s write up on password hash migration. We’re fully migrated:
=> SELECT distinct(password_algorithm) FROM account;
password_algorithm
------------------------
(null)
argon2id
argon2id-pbkdf2-sha256
(Notice: No vanilla pbkdf2-sha256 left. (null) is for password-less SSO-based accounts.)
I captured a backup beforehand just in case, but otherwise went for a pretty aggressive migration strategy of overwriting hashes without intermediary fields or other major hedges. It went smoothly. Here’s a log line showing a hash upgrade to Argon2id-only, implying the user was able to successfully log in via the nested hashing scheme:
password_hash_upgrade_line: Upgraded from "argon2id-pbkdf2-sha256" to "argon2id"
[account: 03c97908-32a3-445b-a52e-03aa9fdec5f8] [hash time: 0.024923s]
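For anyone wondering what the nested scheme means mechanically: the stored Argon2id hash is computed over the password’s PBKDF2 digest rather than over the password itself. A minimal sketch of what verification might look like, with illustrative parameters and function names rather than our production code:

import (
    "crypto/sha256"
    "crypto/subtle"

    "golang.org/x/crypto/argon2"
    "golang.org/x/crypto/pbkdf2"
)

// verifyNestedHash checks an "argon2id-pbkdf2-sha256" password: PBKDF2-SHA256
// is computed first (as Keycloak originally did), then Argon2id over that
// digest, and the result is compared against the stored hash.
func verifyNestedHash(password string, pbkdf2Salt []byte, pbkdf2Iterations int, argon2Salt, storedHash []byte) bool {
    inner := pbkdf2.Key([]byte(password), pbkdf2Salt, pbkdf2Iterations, 32, sha256.New)
    outer := argon2.IDKey(inner, argon2Salt, 1, 64*1024, 4, 32)
    return subtle.ConstantTimeCompare(outer, storedHash) == 1
}

On a successful login the plaintext password is in hand, so the account can be rehashed to Argon2id alone, which is what the log line above shows.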
Some guards I put in:
Published a follow up to my write up on password hashing: Migrating weaker password hashes by nesting them in an outer hash.
Well worth reading: It’s so sad when old people romanticize their heydays, also the 90s were objectively the best time to be alive.
Don’t get me wrong, I’d hate to give up all the great stuff I’ve gotten access to over the last twenty years: access to all information ever produced by humanity a pocket’s reach away, portable computers with functionally infinite computing power and battery life, AirPods, ANC, Google Maps, cameras/lenses with better low light performance than my eye, Amazon Prime, and many more (notably, all of which are advancements in consumer electronics rather than hard science …).
But while these things have indisputably made things more convenient, they haven’t made things better.
The wins are small and nominal and the losses are large and profound. Community and social cohesion are at all-time lows. Low-key meeting spaces like malls, music venues, rundown bookstores, and dive bars are mostly gone – real estate draws such a premium that no one can afford to have it go to waste. Everywhere you go, every man, woman, and child buried in their personal glass.
Check out the article. It’s more convincing than I am.
Published fragment adventures in password hashing, in which we inherit PBKDF2 hashes from Keycloak, and after a couple intermediate steps, migrate to Argon2id.
Published sequence 042, a wall alive.
Seymour Hersh writes about how the US destroyed the Nord Stream pipeline, and although it’s built around the testimony of only a single source, it contains impressively specific details, like how the C-4 was planted by divers under the cover of the BALTOPS22 naval exercise, and was to be triggered by a sonar buoy that’d later be dropped by plane.
As every Maddow-loving, Fauci-fearing, big-D Democrat knows, this is a filthy lie. It was Russia that blew up Nord Stream for … reasons. They didn’t want to make money selling gas so badly that simply pressing the “off” button on the pipeline which they own was insufficient. No, it had to be a bolder, and shall we say, more incendiary statement. KA-BOOM! That Biden went on record a year ago saying he’d destroy it makes no difference. It’s Russia, stupid.
An interesting Twitter thread documents a Wikipedia edit war in real-time in which Mr. Hersh changes from “journalist” to “conspiracy theorist”, a favored new term of art of the illiberal, institutional left that translates roughly to English as “person who disagrees with me”. Most notable is how when the change was at risk of being reverted, powerful editors with special moderation privileges emerge from the woodwork to defend the presence of the new appellation.
On the day of GitHub announcing total closure of all physical offices, I published sequence 041, chez GitHub.
Published sequence 040, on a rainy evening in San Francisco.
This site uses AWS Certificate Manager (ACM) for automated TLS certificate management, and I got a request via email today to approve a renewal through 2024. Validating ownership of a domain can either happen via email or by a CNAME record installed to its DNS, and all those years ago when I first set this up I apparently chose email and have been manually approving a renewal every year since.
Validation by CNAME is better because it’s fully automatic. As long as ACM sees that the CNAME is still installed, it can validate ownership of the domain without asking the user about it. So today, after many years of procrastination, I finally went into ACM to generate a new cert validated by CNAME instead of email.
While doing so, I noticed that as of mid-2021, ACM supports non-RSA certificates using ECDSA elliptic curves, which I opted into. They’re widely supported, and superior to RSA in every respect that matters: less computationally expensive to verify, shorter keys that mean reduced network traffic, and superior cryptography that was developed more in the open compared to RSA.
But as complete a product as AWS is, sometimes it has some fairly mystifying edges. I was given the choice between the P256 and P384 curves, opted for P384, but then found that I couldn’t select my new certificate over in CloudFront (AWS’ CDN). Upon further reading I found CloudFront only supports P256. This limitation is not explained or justified. Just a vague “sorry, P256 only even though our other product generates P384”.
Amazon is an absolute powerhouse beyond all doubt, but is the weirdest of the trillion dollar cohort.
Published fragment PartialEqual, on a custom Go assertion helper that we’ve been using for a few months now to good effect:
resp, err := apitest.InvokeHandler(svc.Create, ctx, req)
require.NoError(t, err)

prequire.PartialEqual(t, &apiresourcekind.Cluster{
    Environment:  ptrutil.Ptr(dbsqlc.EnvironmentProduction),
    IsHA:         ptrutil.Ptr(req.IsHA),
    MajorVersion: req.PostgresVersionID.Int32,
    PlanID:       req.PlanID,
    RegionID:     req.RegionID,
    Storage:      req.Storage,
    TeamID:       req.TeamID,
}, resp)
An HN discussion on unlimited PTO.
We ran under unlimited PTO at Heroku the whole time I was there. At Stripe we started with unlimited PTO before moving to fixed. Stripe’s PTO allowance wasn’t generous, but it worked out okay in the end as I’d had some built up when I left which they ended up having to pay out.
A lot of commenters are so cynical that they view unlimited PTO policies as ill-intentioned, designed to avoid a PTO payout like mine. Maybe that’s the case for some companies, but in the ones I worked for, it was good-intentioned with suboptimal outcomes.
You might intuit that the problem with unlimited PTO would be that it’d get abused, and I’m sure that’s happened, but in my experience, the types of people who’d abuse such a system don’t tend to be hired at the types of companies that’d offer it. I’m entirely sure that unlimited PTO worked against me the entire time I was under it, and the same was true for most of my colleagues. It wasn’t entirely the fault of the policy. To succeed under such a system, you really need to be looking out for number one, tracking your own vacation target carefully, and advocating for yourself.
But of course it’s difficult, especially when you see your colleagues working so hard, which makes you feel bad about taking time off. And unless you’ve got a vocally pro-vacation manager, which is unlikely, you always have to wonder what they think about someone taking more than average.
A lot of companies have moved back in the direction of fixed PTO, which is probably for the best. Years ago Travis CI implemented a minimum vacation policy which is an idea I liked, but things didn’t turn out so well for that company, and I haven’t heard much about minimum PTO since.
Go 1.20 is released. From the notes:
Work specifically targeting compilation times led to build improvements by up to 10%. This brings build speeds back in line with Go 1.17.
Excellent. New features are nice, but it’s the boring fundamentals like build times that really make a difference for everyday users. The addition of generics in Go 1.18 had an impact on build times, but those losses have been recouped. Generics are good. Fast builds are good. We can have both.
HBO’s The Last of Us television adaptation is our latest cultural zeitgeist, and I’m watching it like everybody else. Last night’s episode (ep 3) was the best so far, although it’s notable what they changed to appeal to a broader audience.
— SPOILERS —
Without going into the specifics, suffice it to say that Bill and Frank’s life in the game version is more antagonistic and stark. You can spoil yourself here, but the last piece in Bill’s arc is grim, and it’s not because he dies or is injured. He’s also more abrasive, and exchanges some great one-liners with Ellie as they insult each other. The TV version portrays a far more idyllic life and a very gentle redemption arc. It plays on your emotions well and I liked it better overall, but it does somewhat detract from driving home the brutal edge of The Last of Us’s world.
By the way, I love how they kept the game’s military-style angle-head flashlights looped onto their backpack straps (see Ellie’s left shoulder in the image below). You can actually buy these things.
See: PEP 703 – Making the Global Interpreter Lock Optional in CPython.
Fascinating stuff, and the detail and attention that goes into these PEPs is just incredible. The GIL (global interpreter lock) has been the age-old Achilles’ heel of Python and Ruby, and given the advanced age of these languages, I’d assumed they were problems that weren’t going to be solved as an increasingly intricate implementation makes more fundamental changes harder to tackle every year. But an increasingly parallel world of computing seems to have created demand to attract people with the necessary gumption to pull this off.
Ruby gave up on removing the GIL in favor of Ractors, which encourage the use of many parallel environments, each with its own GIL. They might’ve worked, but were extremely backward incompatible, and two years later, nothing supports them.
Python’s optional GIL will be opt-in via compiler flag, but provides a path for the ecosystem to migrate incrementally so it could be default in a not-so-distant future.
A couple things that stood out to me. The implementation will move to a stop-the-world GC to freeze necessary state:
When running without the GIL, the implementation needs a way to ensure that reference counts remain stable during cycle detection. Threads running Python code must be paused to ensure that references and reference counts remain stable. Once the cycles are identified, other threads are resumed.
And because the GC becomes stop-the-world, its generational GC is disabled:
The existing Python garbage collector uses three generations. When compiling without the GIL, the garbage collector will only use a single generation (i.e., non-generational). The primary reason for this change is to reduce the impact of the stop-the-world pauses in multithreaded applications.
Single-threaded performance takes a 10% hit, but there are already suggestions for changes to recoup some of that. Over time most of that likely gets optimized away.
Published sequence 039, on the site of the bird site.
Two years ago in the depths of San Francisco Covid-mania, I wrote up some predictions on what would happen to the city over the medium term. Today I published Revisiting my two-year SF predictions, and by my count, got 7 out of 9 right.
Some of these seem obvious in retrospect, but I wrote them down because they were starkly contrarian compared to claims made by the city’s commentariat. A popular opinion at the time was that tech needed San Francisco, so the longest and hardest lockdown in the nation would be perfectly okay as a closed downtown would spring right back to life on a word from London Breed, like a dog waking on command. Policy apologists guffawed with dramatic bemusement at the very idea that any of these departures could be permanent. Today, we see that almost every one of them was.
Network effects in cities are powerful, and that indeed has been the story of SF and the Bay Area writ large over the decade as companies swooped in to soak up innovative energy, capital, and talent, which further compounded and cascaded. My prediction today is the city will continue to observe a reverse network effect as more companies give up expensive leases which have continually lessening ROI.
Published sequence 038. You look nice today.
Posted today: Stripe Sets One-Year Timetable to Decide on Going Public. Along with this classic Stripe press “leak”, the company also sent a simultaneous email to alumni, which honestly, was nice of them.
Predictably, there was a lot of catharsis from ex-employees, with even more chatter than during the 14% layoff a few months ago, and a hundredfold above ambient Slack levels. With the first batch of RSUs set to expire early 2024, there’d been a lot of consternation over the last couple years to say the least, so it was a big event.
It’s unambiguously a good thing, but I couldn’t help but notice that even with this BEST NEWS EVER message, the company is still tacking into its usual non-committalism. Instead of unambiguously planning an IPO, it’s “either an IPO or private market transaction”, leaving huge error bars and uncertainty. Maybe something cynical, or maybe just unavoidable given the unpredictable market conditions of 2023. Hopefully, elite 4-D chess in pursuit of profitable ends that lowly grunts like myself aren’t privy to, and couldn’t possibly understand.
Is this a food picture blog now? I hope not. I’ll try not to make a habit of it.
My curiosity was piqued when I found out there’s a Mensho subsidiary in the ground floor of the Twitter building. Mensho’s a ramen company out of Japan that opened a shop called Mensho Tokyo a few years ago in San Francisco, notable for having the best ramen on this continent, and an omnipresent 50+ person lineup out the door and down the street, something I thought would clear with time, but never did. In 2021, they expanded into Twitter’s ground floor with a new installment – Jikasei Mensho.
Unlike the original, Jikasei Mensho’s more of a fast lunch stop sort of deal, and as far as I can tell, not great? You walk up to a counter, order from a computer McDonald’s-style, and your minimum tip option is 15% on a $22 bowl of fast food. It arrives in a plastic bowl. The chashu was good, the eggs were good, but the noodles and broth were subpar, and some of the furnishings questionable, like sliced lemons. Maybe I happened across it on an off day, or maybe ramen tastes worse out of plastic.
Post-Elon return-to-office Twitter is a bit more lively. There were quite a few people down in the cafeteria area where Jikasei Mensho is, where only a few months ago it all sat empty.
Published sequence 037 on the stairs at Oyster Point.
Witnessed a laptop snatching this morning. Woman is sitting next to window in a crowded cafe. Thief comes crashing in, grabs woman’s computer, and bolts. Five people run out after him, but as with a lot of criminal activity in San Francisco these days, it was organized, and a getaway car was waiting at the corner. Thief slams car door shut and rides off into the sunset (not literally; this was 11 AM).
It sounds blasé described post-hoc, but even “just” a theft with nobody hurt (luckily, it could easily have gone the other way) is hectic in real time. Mentally downshifted into computing mode at the time, it took me a good ten seconds to process what’d even happened. People yelling. Tables knocked over. Coffee cups in pieces on the floor.
Whenever something like this comes up, San Francisco apologists are quick to make platitudinal statements like “report it to the police”, as if this is some kind of deep insight that only an enlightened California pro(re)gressive could have imagined. The situation was a perfect microcosm for why most people have stopped bothering. The SFPD was called immediately, and I waited around an hour without a single officer showing up, despite being within six blocks of a station, a cruiser rolling by every few minutes, and a situation that easily could’ve ended with someone seriously hurt. The crime had dozens of witnesses and clear HD footage from two separate cameras, but even if the PD did eventually appear, no one will be caught, let alone see a day in prison.
I started chatting with the guy next to me, who, having just moved to the city a week previous, was somewhat surprised. (But not that surprised, having gotten an accurate taste of the city’s culture from various viral videos.)
I told him that this wasn’t unknown, but not common either. San Francisco, intent on preserving its reputation as a deteriorating municipal hell stemming from the worst mismanagement of wealth in a thousand years, made sure to prove me wrong. Five minutes later, a woman walks in, grabs a handful of bills from the tip jar, and before anyone can react, leisurely strolls back out.
Published sequence 036 on 510 Townsend St.
I finally finished God of War Ragnarök this weekend. It’s a great game, with creative interpretations of Thor and Odin that turned out very well. Thor’s a lumbering giant who’s highly able, even if overly loyal to his father. Odin’s got mob boss vibes, which is unconventional, but a gamble that paid off. The game is huge, and I was surprised multiple times thinking I’d gotten to the end only to find a whole new world to explore.
I 100%‘ed it unlike the 2018 God of War, which meant that I wasted a ton of time following YouTube guides to find artifacts and ravens. I downgraded the difficulty to “give me grace” for the last two optional bosses and don’t feel bad about it. They spam attacks like no tomorrow, have a criminal number of unblockables, and the target lock system kind of sucks, especially in multi-enemy fights.
It’s notable that although very good, for all intents and purposes it’s the same game as the one released in 2018. There’s new story and new areas (although with some reuse), but the game engine, game mechanics, upgrades/skills system, and combat are all practically identical. Development for 2018’s started in 2014, which means that back then it took five years to build a full game and a whole new engine, whereas Ragnarök took six for a game plus some minor updates. I’m sure some of that was lockdown delay, but I kind of suspect the game industry is hitting a plateau for large projects in the same way NASA did for space missions, Lockheed Martin for fighter jets, or Oracle for databases. Horizon Forbidden West (another triple-A title) was the same – great game, but practically indistinguishable from the original.
I’ve been writing Go professionally for something like a year and a half now, and compared to my previous daily driver Ruby, almost everything is better. Readability, speed of runtime, speed of tests, speed of refactoring, IDE insight, tooling – I could go on all day.
But when building out a larger-scale app with a non-trivial amount of domain logic and objects, it’s got holes. Some are as wide as Jupiter, and I can’t believe how little I see about them online.
The biggest one I’m looking for an answer to is data loading, or more specifically, how to do data loading without thousands of lines of extraneous boilerplate. We solved our SQL-in-Go problem by moving to sqlc, but that in itself isn’t enough. I still write code like this daily:
team, err := queries.TeamGetByID(ctx, uuid.UUID(req.TeamID))
if err != nil {
    return nil, xerrors.Errorf("error getting team: %w", err)
}

if team.OrganizationID.Valid {
    org, err := queries.OrganizationGetByID(ctx, team.OrganizationID.UUID)
    if err != nil {
        return nil, xerrors.Errorf("error getting organization: %w", err)
    }

    if org.MarketplaceID.Valid {
        marketplace, err := queries.MarketplaceGetByID(ctx, org.MarketplaceID.UUID)
        if err != nil {
            return nil, xerrors.Errorf("error getting marketplace: %w", err)
        }

        return nil, apierror.NewBadRequestErrorf(ctx,
            errMessageTeamDeleteMarketplace, marketplace.DisplayName)
    }
}
In Ruby (or any language with some dynamicism and a good ORM), that whole block compacts comfortably to a single line of code:
raise ... if Team[req.team_id].organization.marketplace
And the Go version is only as short as it is because our foreign keys mean that we can skip handling some types of user-facing errors. i.e. We don’t have to worry about translating a missing organization to a 404 because foreign keys guarantee its existence. Sqlc also saves hundreds of lines – before, the Go code for every one of those queries like TeamGetByID had to be written by a human and tested. Now we write the SQL and let sqlc do the work, whereas with Go’s built-in database/sql you’d still be doing all of it by hand.
Another bad-but-unavoidable Go pattern is preloading objects to avoid N + 1 queries, but then having to manually map them into structures that your code can actually use:
teams, err := queries.TeamGetByIDMany(ctx, sliceutil.Map(unsentInvoices,
    func(i dbsqlc.Invoice) uuid.UUID { return i.TeamID }))
if err != nil {
    return xerrors.Errorf("error getting teams: %w", err)
}

teamsMap := sliceutil.KeyBy(teams,
    func(t dbsqlc.Team) uuid.UUID { return t.ID })
Again, with an ORM this is:
Invoice.load(..., eager: [:team]).each { |i| i.team.name }
And the Go code was >2x longer just one release of Go ago. Notice the functions sliceutil.Map and sliceutil.KeyBy, which were impossible without generics. Before, each of these was an initialization and a for loop – 4-5 lines each.
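For reference, here’s a minimal sketch of what helpers like these look like with generics (the real sliceutil versions may differ in naming and details):

// Map returns a new slice with f applied to every element of items.
func Map[T, U any](items []T, f func(T) U) []U {
    out := make([]U, len(items))
    for i, item := range items {
        out[i] = f(item)
    }
    return out
}

// KeyBy indexes a slice into a map keyed by whatever keyFunc extracts
// from each element.
func KeyBy[T any, K comparable](items []T, keyFunc func(T) K) map[K]T {
    out := make(map[K]T, len(items))
    for _, item := range items {
        out[keyFunc(item)] = item
    }
    return out
}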
I’ve experimented with custom data loading frameworks that sit a layer above sqlc to reduce boilerplate:
err = dbload.New(tx).
    Add(dbload.Loader(&loadBundle.Cluster, req.ClusterID)).
    Add(dbload.LoaderCustomID(&loadBundle.Provider,
        req.ProviderID)).
    Add(dbload.LoaderCustomID(&loadBundle.Plan,
        dbsqlc.ProviderAndPlan(req.ProviderID, req.PlanID))).
    Add(dbload.LoaderCustomID(&loadBundle.Region,
        dbsqlc.ProviderAndRegion(req.ProviderID, req.RegionID))).
    // must be loaded after cluster
    Add(dbload.LoaderFunc(&loadBundle.PostgresVersion, func() *uuid.UUID {
        return &loadBundle.Cluster.PostgresVersionID
    })).
    Add(dbload.LoaderFunc(&loadBundle.Team, func() *uuid.UUID {
        return &loadBundle.Cluster.TeamID
    })).
    Load(ctx)
if err != nil {
    return nil, err
}
But so far it’s nowhere near enough – this one makes point loading by IDs a lot more succinct, but doesn’t handle anything less trivial like associated objects or one-to-many relationships.
I’m fairly sure that there’s nothing approaching an answer to this problem in the Go ecosystem. But aren’t there millions of lines of production Go out there by now? How aren’t more people running into this?
An article from 37 Signals on cloud spend.
They’re concerned enough about their AWS spend that they’re moving to their own hardware, a path paved before them by the likes of Dropbox and GitHub. It’s also always interesting when companies are transparent about their cloud spend. Hey (their email service) for example, costs $89k/month or $1.1M/year in AWS, with the biggest component being RDS, which eats a quarter of that.
The AWS bill is a creeping concern that tends to trend slowly upward over time without anyone really noticing, then suddenly jumps out of a bush to knife you in the jugular. In the beginning it’s “wow, look at all this stuff we’re getting for a few bucks a month”. Then a few years later it’s “what?! how many f* millions?? holy shit!!” It’s amazing how much money can be spent at $0.023 a gigabyte. At Heroku and Stripe we were eventually forced to engage in major AWS cost reduction projects, and in both cases the engineers involved ended up paying their own salaries many times over.
[3 months since day zero. The day Elon Musk bought Twitter, and the world ended.]
Know anybody who loudly quit “the bird site” in anger to go to Mastodon, and has stuck with it?
Me neither.
An essay: There’s no planet B, making the case that making another planet like Mars fit for humanity is going to be so impossibly hard that it’s not going to happen, and by extension that the bulk of our energies should be focused on planet A, the one we’re already living on.
I’m reminded of Christopher Nolan’s Interstellar, in which he makes an effort to keep the movie relatively grounded, famously bringing on physicists as consultants, and even using contemporary shuttles and rockets in some parts. But even this “grounded” movie necessitates the sudden appearance of a nearby wormhole, and the invention of anti-gravity technology for the plot to work.
As cool as SpaceX is, it’s not even one percent of one percent of what humanity needs to colonize another planet, and appears to be right up against the envelope in terms of scale we’re able to tackle. Planet B isn’t happening, but its hope is the policy equivalent of deus ex machina, letting us rationalize current direction under the premise of a nebulous future savior.
Twitter’s holding an auction for assorted office furnishings that closes tomorrow at 10 AM Pacific.
Mildly entertaining: many items from Ohio Designs, a boutique San Francisco manufacturer, and the same one that we used to buy from at old Heroku to add to our office’s raw “loft” look. Disgustingly overpriced, but some of the most solid furniture ever created, with heavy steel frames that I wouldn’t bet against surviving a nuclear blast. And by extension, also impossible to move/lift. Their HQ is next door to Southern Pacific Brewing Company on Treat. I know that because at one point I became obsessed with these things and went over to inquire about buying one for home, before settling on an alternative at CB2 with similar build quality at a quarter of the price.
A Twitter bird statue and Twitter neon light are sitting at $18k and $20k respectively, 10x their worth, and probably < 1x what Twitter paid for them. A far cry from $44B, but a tiny consolation prize, and notable for their part in the symbolism of selling off the worst excesses of the old era of tech.
In the off chance this helps someone else: I started building my own ImageMagick after the project changed their static binary to an AppImage, which doesn’t work very well.
As with all programs from that age, every feature is configurable, and it’s non-trivial to get it compiled with support for more unusual formats like HEIC. I thought I’d succeeded, but have for months been using a build that couldn’t read HEIC, and it’d only been working by virtue of me not uploading a HEIC in a while.
My ./configure invocation looked right:
./configure --with-heic=yes --with-webp=yes ...
But it’d been silently failing:
Delegate Library Configuration:
...
Ghostscript lib --with-gslib=no no
Graphviz --with-gvc=yes no
HEIC --with-heic=yes no
...
Yes to HEIC, but actually … no.
There are GitHub issues around with the same problem. It can have a variety of causes, but probably means that configure couldn’t load one of the dependencies for HEIC properly (libde265 and libheif).
ImageMagick’s build process silently swallows problems, so it’s necessary to go to configure.log to find out what happened. Digging into mine, I found that I was missing another dependency called dav1d, which turns out to be an AV1 decoder from the VideoLAN project.
configure:31871: $PKG_CONFIG --exists --print-errors "libheif >= 1.4.0"
Package dav1d was not found in the pkg-config search path.
Perhaps you should add the directory containing `dav1d.pc'
to the PKG_CONFIG_PATH environment variable
I fixed the problem with:
apt-get install libdav1d-dev
The FAA’s recent trouble with NOTAM caught my eye for a couple reasons:
It happened 12 hours after my flight out of Calgary. I hadn’t been too happy about our 4 hour delay, but it goes to show that things can always be worse.
FAA traced the outage to a damaged database file. Mongo, is that you?
I’m still holding out hope for a full Silicon Valley-style outage postmortem, but from what we can glean, it sounds less exciting than you might hope. “Database” can mean a lot of things, and from the sound of it, this is more like a flat file.
A finding from yesterday was that the file had been corrupted by manual manipulation by a pair of contractors. There were supposed to be safeguards to prevent that from happening, but they didn’t work. We also found out that, in line with a long tradition in operations, there had been backups, but when brought online, the backups were also found to be corrupted. Absolutely classic.
I watched Black Adam (2022). I don’t know why – mild curiosity? Masochism? Latent Dwayne Johnson fandom?
Let’s start with character vitals:
It’s never clear what exactly the conflict is or why the viewer is supposed to care. As Black Adam wakes, a group of superheroes is sent to confront him, but it’s never in question that they’re on the same side, and they only fight it out to prove the narcissism of small differences. Later there’s a token bad guy, and as you might be able to imagine, it’s a huge mystery as to whether the combined might of Black Adam and his new friends at the Justice Society will be enough to take him down.
There was a long period of MCU golden age over the last ten years where I thought that Hollywood had figured it out – the perfect formula of fan service, special effects, and quippy dialog that was reliably reproducible to make a film that fans liked and would land a solid ~8 on IMDB. Apparently not, or at least the DC studios never got the memo.
Don’t worry, this hasn’t become a movie blog. Tomorrow, back to your regularly scheduled programming.
A light movie recommendation: Triangle of Sadness (2022).
I didn’t know much of anything about it going in beyond “a couple, both young models, go on a cruise with wealthy elites”, which is exactly the right amount to know about it. The movie takes some major turns throughout, and it’s very enjoyable if you don’t know they’re coming at all (I didn’t, but have since read some synopses that would’ve spoiled them). It’s a bit artsy and a black comedy, so it won’t be everyone’s cup of tea, but it punches well above its IMDB 7.5 and distantly above its 63% RT (can anyone trust film critics anymore?).
This one’s a few years old, but a nice essay on Where to live, a question I’m struggling a lot with right now (and for the last five years). A stranger will never be able to answer something so important for you, but while searching for insight, cast your net wide.
Anyone remotely adjacent to the tech industry will know about the Sam Bankman-Fried saga by now: the collapse of FTX as the biggest Ponzi scheme of all time, subsequent world podcasting tour by Sam to broadcast his innocence, arrest in the Bahamas, extradition to the US, release on $250M bail, and return to his parents’ home in Stanford, back on Twitter and League of Legends until October.
Consuming pre-exposure material on Sam qualifies as its own genre of entertaining dark comedy. The broad themes are always the same – hosts laud his genius, ask no real questions, and make dozens of grand assertions with absolute confidence.
Highlights of Acquired’s FTX episode:
In the hosts’ own words, FTX’s 10x growth in valuation for 2021 was quote “unbelievable”. Literally correct.
Sam is not shy about congratulating his own brilliance, but the hosts do most of the work for him, unprompted. He’s compared to Jeff Bezos 3-4 times, and not in succession. The comparison is resurrected again and again to drive it home.
A hearty laugh is shared as Sam jokes “all you have to do to beat Mt. Gox is not lose everyone’s money”. So on the nose.
He was thinking about leaving his suit at his brother’s in DC because except for wearing it for congress, he didn’t know when he’d use it again. Good thing he didn’t, having since gotten good mileage out of it in the Bahamas and NYC.
The greatest moment is when Sam is bestowed the appellation “the zeitgeist artist of our time”, meaning that he understands today’s confluence of culture and technology better than anyone else on Earth.
See also Sam on Unusual Whales, post-exposure. He’s typically evasive, but the last 13 minutes (after Sam leaves) is frustrated editorial from panelists, and excellent.
This is simply excellent: Production Twitter on One Machine? 100Gbps NICs and NVMe are fast.
The article isn’t suggesting that Twitter actually do this, but explores how possible it’d be to run Twitter on one really big server. It shows its work, and contains dozens of Fermi estimates based on best available public data:
Now we can use this to compute some sizes for both historical storage and a hot set using fixed-size data structures in a cache:
tweet avg size = tweet content avg size + metadata size => 176 byte
tweet storage rate = avg tweet rate * tweet avg size in GB/day => 88 GB/day
tweet storage rate * 1 year in TB => 32.1413 TB
tweet content fixed size = 284 byte
tweet cache rate = (tweet fixed size + metadata size) * max sustained rate in GB/day => 251.7647 GB/day
Doesn’t every programmer secretly love the idea of a mainframe? One giant machine that runs everything and has its own redundancy internally. Ensuring scalability through processes designed to be run in parallel is obviously more practical and more robust nowadays, but if you were to try and run Twitter on one machine, you might be able to get results that aren’t too much worse, and with 1000x less infrastructure.
It wasn’t too long ago we were really trying to do this. At iStock circa 2011 where the ops team was running the asylum, right around the end of my tenure we purchased a huge mainframe-esque box that advertised being able to stay online even if one of its CPUs failed. I was never sworn in on how much it cost, but undoubtedly the price tag would’ve made my eyes bleed. That was right around the period where misguided attempts to scale vertically and racking your own specialized hardware was already starting to look pretty silly, so I’m curious in retrospect how far it made it into production.
Published sequence 035.
This turned out to be a bit of a yak shave as I found out that despite a lot of work adding support to this site for various types of image formats over the years, it still wasn’t well set up for .heics that come out of an iPhone. ImageMagick resizes them, but browser support sits comfortably at 0% so it’s not a format you want to use on the web. I added some code to the build process to convert them to WebP (96% support), but that in turn required sizable refactoring to support outputs with an extension different than their input.
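The conversion step itself is nothing exotic. Here’s a rough sketch of what shelling out to ImageMagick from a Go build step might look like, assuming ImageMagick 7’s magick binary is on PATH (the resize geometry and quality are illustrative rather than this site’s exact settings):

import (
    "os"
    "os/exec"
)

// convertHEICToWebP produces a WebP rendition of a HEIC original. The
// output format is inferred by ImageMagick from dst's .webp extension.
func convertHEICToWebP(src, dst string) error {
    cmd := exec.Command("magick", src,
        "-resize", "1200x1200>", // fit within 1200px, only ever shrinking
        "-quality", "85",
        dst)
    cmd.Stderr = os.Stderr
    return cmd.Run()
}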
I should investigate using WebP as a general target regardless of the input format since it seems like a better default that produces better results compared to mucking around with JPGs and PNGs. But it’s not super pressing because although WebPs will produce considerably better compression than most JPG exports, they’re not too much better than a JPG that’s been optimized via MozJPEG, which this site does.
Published Content authored by ChatGPT front pages, wherein we see empirical proof that an article generated by ChatGPT is good enough to get all the way to the front page of Hacker News before anyone notices.
A guilty pleasure is reading other peoples’ multi-thousand word reviews on Leica cameras. From today: Fujifilm to Leica which goes into detail in rationalizing buying an M11.
I hate that I love these cameras. At Heroku we had a few vocal Leica users. One went as far as to sell all his equipment to simplify his life on a single Leica rangefinder and lens (an M9). But like the Leica brand itself, the move wasn’t about practicality or technical superiority, it was about romance. Leica doesn’t sell cameras, it sells lifestyles, and fans do 80% of the work for them. See also sh*t Leica photographers say.
Years later I’d pick up a Q, an amazing camera. The best that Leica’s ever produced. But once you’ve dipped your toes in, you’re forever haunted by the siren call of the M-series.
M cameras are a rip off ($9k for an M11 body + $6k for a lens). Everybody knows they’re a rip off. No honest person even tries to argue they’re not a rip off. But they’re well-designed, and again, supremely romantic.
I check in every so often, and luckily, have held the line. Besides the price tag:
Rangefinders were always questionable, but EVFs made them obsolete. This might be debatable, except we have proof that Leica knows it too. The camera doesn’t come with an EVF, but Leica won’t hesitate to sell you one. For $750.
$15k. Not weather sealed.
Questionable software. Slow start up time.
On the plus side, the M11 did away with the bottom plate, a feature that’d been left in until the M10 for historical reasons, but completely impractical in every way. You can’t beat full frame in a body that size, or the minimalist controls.
Leica must send $100M/yr Fuji’s way to keep cameras on APS-C and with at least six unnecessary dials each. It’s the only explanation for such magnanimity in staying out of the competition.
The most common event in our times: An expert with power/authority/influence is wrong in full view, yet suffers no consequences.
Contrary to popular belief, it doesn’t rain that much in San Francisco. A few dozen times a year it does. Very occasionally, it rains an unusual amount. Today it rained 0.83”, which is kind of a lot, I guess.
Local media and Twitterati (now newly christened armchair meteorologists), not ones to let a kinda-maybe-almost-but-not-really catastrophe go to waste, dramatically dubbed this THE BOMB CYCLONE. In a flashback to her greatest hits of 2020, London Breed heroically ordered residents to stay home.
By comparison, during an actual adverse weather event like hurricane Ian in Florida last year, the state saw rainfalls of 10 to 20” over a four-day period, ~3 to 6x daily what SF saw today. Tokyo’s rainiest month of October averages 9.24” over ~12 days of rain. In other words, the poor Japanese are dealing with twelve THE BOMB CYCLONES just in the month of October, every year.
Gap puts their corporate HQ in San Francisco up for lease or sale, notable due to the size of the lease (162k sqft), but also because it’s historically one of the city’s preeminent companies, founded, headquartered, and proudly operating out of San Francisco.
Only months ago San Francisco’s pundit class decried “misinformation” that Gap had shuttered operations in SF, despite having clearly closed their flagship store on Powell (which remains an iconic SF empty storefront to this day) and corporate office for Old Navy. Gap, they said, is still very much in the city, you stupid conspiracy theorists. But as the facts become indisputable, the strategy shifts from attack to denial – none of this is happening. Downvote, misdirect, ignore.
Gap’s departure is the latest blow in a long series of insults to San Francisco’s bottom line. Office vacancy is at a 30-year high, large real estate holdings are requesting reassessments to reflect decreased value, and even other SF darlings continue to flee (Salesforce announced a 10% layoff round this morning along with further office space reduction).
The deficit, currently a mere $728M, grows.
I’m trying to read James Joyce’s masterpiece Ulysses, again, fast becoming an annual tradition in humiliation and failure. I’ve tackled my fair share of literature – Dostoevsky, Dickens, Kafka, Vonnegut, Hemingway – and for my money, Joyce and Ulysses are on a level of their own, with allusions, puns, and classical references so rich they’ll leave you reeling, and wordplay so clever that it’s a wonder a human could write it.
But I’ve never in my life come across a book that’s so thoroughly and so totally unreadable. The prose is heavier than a pallet of bricks. A few years ago I tried to cheat my way to success by getting it in audiobook form, only to be foiled by the narrator’s dramatic Dubliner accent, which while appropriate, made things even worse. I gave up after less than an hour.
Back to written form, and 10% through. I’ll make it, some day.
English language Stack Exchange bans ChatGPT-generated answers, taking a step beyond what the network as a whole has committed to.
For years the concerns from public intellectuals (few of whom are AI or computer specialists) over AGI have felt overblown to me, but ChatGPT’s got me officially worried. Not that AGI’s going to take over the world and enslave us all, but rather that we’re about to smash headlong into a creative wall in communication and writing as ChatGPT becomes the perfect remixer, able to endlessly recycle new permutations of the existing bulk of human output in a way that’s convincing enough for us to accept. Never able to produce anything interesting or new, but with few people who notice since we spend our days consuming human-regurgitated remixes anyway.
I’m expecting more announcements like Stack Exchange’s over the coming months. Discerning humans can probably distinguish ChatGPT output for their chosen field, for now, but it’s certain to cause problems in everyday life. For example, search engine spammers taking their game to a new level with AI-produced filler far more convincing than the Markov chain or plagiarism techniques they use today. Firms using it for frontline support to lead customers in endless rhetorical circles even more frustrating than the precanned answers and telephone mazes of today. Phishing seniors via email with previously unimaginable sophistication.
You can’t put this genie back in the bottle. If we don’t make it, China will. Regulation’s a tempting idea, but lawmakers, barely able to sound even partially articulate in hearings over comparatively simple issues like social media, are wholly unequipped to tackle such a complex subject. The only practical answer is supremely unsatisfying: wait, and see.
Bloomberg reports how Shopify is canceling recurring meetings with more than two people to kick off 2023.
Tobi’s taking a lot of flak over it from middle managers on HN (my favorite from the high drama category: “Tobi has done this a couple times […] it comes from a place of narcissism”). I think it’s a great idea. In the early 2010s at Heroku I’d started to think that at least as far as tech was concerned, meetings were a solved problem. It was an engineering-driven culture, and most engineers hated them, so we’d schedule as few as possible, with only a few key ones per week. But then I got to Stripe and realized that I’d been living in a bubble – an uncomfortable truth of the universe is that most people love meetings, even if they think they don’t, and doubly so for well-pedigreed Stanford types most likely to work at top Silicon Valley firms. Far from being an element of change, tech was only the latest continuation of a long tradition in big American business. Every satirical panel from 90s Dilbert cartoons held perfectly true.
Based on testimony from Shopify employees, Bloomberg gets a few points wrong:
Meeting cancellations aren’t permanent. The move wipes the slate clean, but important meetings can be put back on the docket.
It’s not the first time Shopify’s done it. One person says that these events were originally meant to occur every rand(600) + 300 days.
Targeting recurring meetings specifically is the right move. These get slapped onto calendars, often for large groups, and their tendency is to stay after their inertia’s built. Before long no one can even remember a time without them, and it’d be heresy to challenge their existence. Ending recurring meetings by default shifts the burden of proof to the right place – it’s now on middle management to re-justify their existence rather than falling to a maverick engineer who has to burn social capital in a risky attempt to argue for their discontinuation.
Occasionally, especially when thinking about the past around the holiday season, I’ll use the Way Back Machine to take a look at sites which I used to consider the absolute pinnacle of great web design back in the day. Today’s specimen is A List Apart, a site that writes about web design. Screenshot of a 2009 capture below.
Some things that come to mind:
We all loved small fonts back then. The body text here is ~8pt Verdana which is a style I used all over the place back in the 2000s. I’d hazard a guess that the reason a lot of us liked them is that larger fonts didn’t look anywhere near as good as they do now, before the advent of subpixel anti-aliasing and retina displays.
A key element that makes the web today look as good as it does are fonts, which render so beautifully nowadays that it makes the web’s default look quite good, even where minimal design has been applied. Old sites should benefit from these advancements, but it never seems to translate through. Their fonts don’t look great and it makes the whole site not look great.
The broad strokes of this old layout are excellent, and better than alistapart.com’s current site. The specifics are a bit dated and it needs niceties like responsiveness, but a little modernization would easily bring it up to date. The current era of web design is too quick to remove any and every extraneous element, and although the baseline product is decent, 2022’s internet looks uniform and somewhat bland.
Notice all the text in images? (e.g. in the logo, “An event apart”, “A book apart”, etc.) The reason people did that is that it looked better than what the web browser could render at the time. Programs like Photoshop would yield additional anti-aliasing beyond the browser’s discrete, blocky pixels. It’s ironic that transported through time, text that was put in images to look better now looks much worse.
When all you have is a hammer (access to free Heroku apps), everything looks like a nail (dynamic apps + buildpacks for everything). On the eve of November 28’s Herokupocalypse, I pulled the static rip cord and dropped the dynamic component from apps that should’ve been static HTML pages from the get go. The totality of tooling required: Chrome and a text editor.
See also Nanoglyph 022: Time and Entropy.
Happy new year!
I always love these first few days of a new year. A fresh slate, after a week off, batteries recharged, and optimism for the year ahead. I’ve spent the last few days doing little else beyond writing, some casual coding, and running. It’s been great. Despite some of the most uncertain times in which I’ve ever lived, I’m feeling a lot better about 2023 than 2022.
Wrote a few notes on publishing iOS live photos, involving exporting a .mov from iOS and running it through ffmpeg to fix aspect ratio, scale, strip audio, and convert to web-friendly codecs. Taking live photos is a nice alternative to shooting in video mode because you can have them captured by default, so there’s a lot less mucking around with start/stop.
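For a flavor of what that processing involves, here’s a rough sketch of an ffmpeg invocation wrapped in Go’s os/exec (the flags and sizes are illustrative; the exact settings, including the aspect ratio fix, are in the linked notes):

import (
    "os"
    "os/exec"
)

// convertLivePhoto strips audio, scales a live photo's .mov down, and
// encodes it to VP9/WebM for use in a <video> tag.
func convertLivePhoto(src, dst string) error {
    cmd := exec.Command("ffmpeg",
        "-i", src,
        "-an",                 // drop the audio track
        "-vf", "scale=720:-2", // 720px wide, height rounded to an even number
        "-c:v", "libvpx-vp9", "-crf", "35", "-b:v", "0",
        dst)
    cmd.Stderr = os.Stderr
    return cmd.Run()
}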
A major takeaway: you can make a <video> behave like a classic animated GIF by having it autoplay and loop, but only by also including the muted option, which browsers require before allowing autoplay to prevent us all from losing our minds.
<video autoplay loop muted playsinline>
    <source src="/videos/girm632/cougar-1.mp4" type="video/mp4">
    <source src="/videos/girm632/cougar-1.webm" type="video/webm">
</video>
playsinline is another magic Apple token that tells an iPhone that it’s okay to autoplay a video.
Eugyppius has a way with words:
The pre-pandemic world is gone forever. If the past year has taught us nothing else, it has taught us that much. Mass containment has permanently transformed our societies and our cultures. It has cemented the cooperative relationship between the regime and the press, and it has changed the content and the tenor of our media. Drama and panic have always sold newspapers, but our new era is characterised by an unending self-reinforcing cyclone of hyperventilation journalism, the likes of which we’ve never seen before. For the foreseeable future, I think, we will careen from one crisis to the next.
Truer words have rarely been spoken. The whole edition is well worth reading.
The Calgary Zoo yesterday, somewhere I haven’t been in years. The grounds feel smaller now, but the animals are impressive, and we got lucky with many out in the open. Amongst others:
Although not as cold as last week (-30C), it was chilly out (-3C), and I’ve never been so thankful for the butterfly conservatory’s hot, humid interior (currently minus the butterflies for the winter season) to recharge body temperature.
I played around with exporting some iOS live photos and running them through ffmpeg to make some VP9/.webm shorts. It worked reasonably well, despite the idiosyncrasies of the <video> tag.
Follow up from yesterday’s note on atom reslugging: Short, friendly base32 slugs from timestamps, discussing how unique slugs are generated for each atom on this page.
A published timestamp is changed to bytes then encoded to base32 similar to RFC 4648, except with digits leading so that output slugs sort in the same order as the input timestamps. Implemented in ~4 lines of Go:
var lexicographicBase32 = "234567abcdefghijklmnopqrstuvwxyz"

var lexicographicBase32Encoding = base32.
    NewEncoding(lexicographicBase32).
    WithPadding(base32.NoPadding)

func atomSlug(publishedAt time.Time) string {
    i := big.NewInt(publishedAt.Unix())
    return lexicographicBase32Encoding.EncodeToString(i.Bytes())
}
In the unlikely chance that you happen to be following this in an RSS reader, you just got a dump of every atom written so far all over again. Sorry about that. It happened because I “reslugged” (assigned a new URL-friendly identifier) them all, making your reader think each was new content. I normally wouldn’t make a breaking change like that, but there was an improvement opportunity I couldn’t resist, and the project is only a few days old, so I went for it. It’ll be the last time something like that happens.
Each atom’s slug is its published timestamp converted to bytes and base32-encoded (e.g. this atom’s is giqmrd2). I changed the encoding character set from the normal abcdefghijklmnopqrstuvwxyz234567 to my own numbers-first variant of 234567abcdefghijklmnopqrstuvwxyz. The reason I did that is so base32-encoded slugs always sort lexicographically the same as their source timestamps, which is quite a convenient property. More on this in a short piece tomorrow.
I updated my now page for the first time in almost three years. It may have been an awful couple years, but at least a great week for this site?
Write as often as possible, not with the idea at once of getting into print, but as if you were learning an instrument.
— J.B. Priestley
My prose output in 2022 was way down (not counting commit messages and Reddit comments), and I feel this viscerally. The less you do it, the harder it gets, leading to a vicious negative feedback loop. Come 2023, I want to invert that cycle. Write more and more often, and gods willing, make it better, and easier.
Published fragment: Easy, alternative soft deletion: deleted_record_insert.
We’ve switched away from traditional soft deletion using a deleted_at column which is wound into every query like deleted_at IS NULL to one that uses a schemaless deleted_record table that’s still useful for debugging, but doesn’t need to be threaded throughout production code, and razes a whole class of bugs involving forgotten deleted_at IS NULL predicates or foreign key problems (see Soft deletion probably isn’t worth it).
I recommend the use of a generic insertion function:
CREATE FUNCTION deleted_record_insert() RETURNS trigger
    LANGUAGE plpgsql
AS $$
    BEGIN
        EXECUTE 'INSERT INTO deleted_record
            (data, object_id, table_name)
            VALUES
            ($1, $2, $3)'
        USING to_jsonb(OLD.*), OLD.id, TG_TABLE_NAME;
        RETURN OLD;
    END;
$$;
Which can then be used as an AFTER DELETE trigger on any table:
CREATE TRIGGER deleted_record_insert AFTER DELETE ON invoice
    FOR EACH ROW EXECUTE FUNCTION deleted_record_insert();
Our results have been shockingly good. No bugs to speak of, no reduction in operational visibility, and less friction in writing new code and analytics.
I wrote some meta-commentary about this Atoms list in a fragment: The Unpursuit of clout.
The broad thesis is that publishing here is a little harder than publishing on Twitter (it involves a macro to insert an entry in a TOML file and doing a git push), but over the last few days which are this project’s lifespan to date, I’ve been finding it easier. With no favs, likes, subtweets, or comments, you’re not performing for anybody, and not gaming any metrics. Just write, publish.
As is traditional, Ruby 3.2 was released on Christmas day. The big news is that Shopify’s YJIT (Yet Another Ruby JIT) engine is out of experimental status and now considered production ready, having been battle-tested at Shopify for a year. As CEO Tobi notes:
Very good chance that YJIT is now running more ruby net code than any other VM. Shopify storefronts are a sizeable percentage of all web traffic!
Benchmarks show a performance improvement of ~40% over CRuby, so it’s exciting for all Ruby users, including us. I can’t imagine not looking into switching to YJIT in the new year.
Alongside Sorbet, Stripe was also working on a JIT, but with YJIT mainline, I think it’s safe to say that theirs stays Stripeware (if it’s even still being developed). Shopify’s demonstrated a healthier model for working with open-source projects – by maintaining close connection with the core teams (including Rails as well), their work goes upstream, and comes under the umbrella of collective maintenance. Stripe’s projects are in danger of ending up more like Facebook’s Hack, diverging far enough from the trunk to become a separate ecosystem.
Also nice to see is that YJIT is written in Rust, after a successful port in April. This is in opposition to Stripe’s commitment to a C++ toolchain, and likely to keep the project more maintainable and more sustainable (and hopefully extend these properties to Ruby itself as it makes inroads to core, which is heavy C).
I wrote a mini-review for The Way of Water, the latest in the Avatar franchise. Go for the cutting edge CG of blue aliens and Pandora whales, stay for the 22nd century Spruce Goose-esque flying boat, crab mechs, and v2.0 exoskeletons.
The latest Twitter files by way of David Zweig, on the suppression of Covid information (and actually, read this long form version instead).
A few days ago Elon Musk appeared on All-in, and put it best: “all the conspiracy theories about Twitter turned out to be true”. It was apparent to anyone paying attention that Twitter was censoring true statements inconvenient to the story being told by Fauci and the White House, despite vehement denial from all parties involved. As with previous iterations, the Twitter files aren’t about knowing a murder had taken place, but rather about finding the smoking gun with fingerprints intact.
A thought-provoking interview with John Mearsheimer, a decorated political scientist, and rare dissenting voice on the war in Ukraine. He believes there’s an imminent danger of escalation, and makes the case for suing for diplomatic resolution.
In San Francisco, properties assessed at a total of $59 billion have requested reassessment in 2022, having correctly recognized that owning downtown isn’t as valuable a proposition as it was in 2020. Adjustment would bring the total down to $26 billion, translating to $308 million in lost property tax revenue should all their reassessments succeed. Add that to the already identified $728 million hole in the budget over the next two years.
Claimed by the article: San Francisco is slow to return to work due to a high concentration of “tech and professional services”.
Omitted from the article: after a decade spent increasing taxes and slathering on red tape, San Francisco locked down earliest in the nation, and for the longest, keeping major restrictions in place until just a few months ago. Add decades worth of obstructionism having contributed to some of the priciest rents and real estate on the planet, and in a major surprise, young workers didn’t find it a compelling place to hunker down for three years of life in stasis, and businesses didn’t find it compelling to sit on three years worth of empty offices.
The good news: San Franciscans achieved their final victory in driving out The Bad People. The bad: rents are as high as ever, the budget is a smoking crater in the ground, and the 70s didn’t come back.
The other day I found that my automated job to cross-post to a toy Spring ‘83 board started failing, with the reason being that my last sequences entry was more than 22 days ago. Not good – I’d intended to keep them more up-to-date than that – but I don’t always have anything new to post.
I’ve always liked the idea of .plan files, tiny plaintext files that live in a user’s home directory and which would be dumped using the finger command on a target user. Famously, John Carmack published them for more than a decade. Here, in a similar spirit to .plan, I’m introducing “atoms”, tiny multimedia particles that are fast and easy to write. Along with sequence entries, they’re cross-posted to Spring ‘83, and will also live at /atoms.
Xmas morning. A signature gift was a new Zojirushi rice cooker, so I’ll be graduating from cooking rice in my old steamer (even if you don’t recognize the name, their elephant logo is conspicuous worldwide).
Zojirushi reminds me of the old Heroku office at 321 11th St, where we had a number of their vacuum boilers. They’d keep water at perfect coffee-brewing temperature, ready to make a Chemex pourover at any moment.