brandur.org

Atoms

Multimedia particles in the style of a tweet, also serving as a changelog to consolidate changes elsewhere in the site. Cross-posted to an atom feed. Frequently off topic.

I’ve updated The Two-phase Data Load and Render Pattern in Go after Roman pointed out that if we swap the position of two generic parameters in Render, another generic parameter can be inferred, and every invocation gets considerably cleaner.

Previously, Render looked like this:

func Render[TLoadBundle any, TRenderable Renderable[TLoadBundle, TModel, TRenderable], TModel any](
    ctx context.Context, e db.Executor, baseParams *pbaseparam.BaseParams, model TModel,
) (TRenderable, error)

And was invoked like:

resource, err := apiresource.Render[*apiresourcekind.ProductLoadBundle, *apiresourcekind.Product](
  ctx, tx, svc.BaseParams, product,
)

In the updated version, the positions of the first two generic parameters are swapped:

func Render[TRenderable Renderable[TLoadBundle, TModel, TRenderable], TLoadBundle any, TModel any](
    ctx context.Context, e db.Executor, baseParams *pbaseparam.BaseParams, model TModel,
) (TRenderable, error)

And the function can now be invoked like this:

resource, err := apiresource.Render[*apiresourcekind.Product](
  ctx, tx, svc.BaseParams, product,
)

Much cleaner. A caller no longer even needs to know that the load bundle exists. At work, I applied the fix to hundreds of existing call sites, and the difference in readability is night and day.


River Python is shipped (with a huge assist from Eric Hauser, who contributed all the original code), enabling insertion of jobs in Python that will be worked in Go. It supports all the normal insert features including unique jobs and batch insertion, along with Python-specific stretch goals like exported type signatures, async I/O, and a @dataclass-friendly JobArgs protocol.

Here’s roughly what it looks like in action:

import json
from dataclasses import dataclass

import riverqueue
import sqlalchemy
from riverqueue.driver import riversqlalchemy


@dataclass
class SortArgs:
    strings: list[str]

    kind: str = "sort"

    def to_json(self) -> str:
        return json.dumps({"strings": self.strings})

engine = sqlalchemy.create_engine("postgresql://...")
client = riverqueue.Client(riversqlalchemy.Driver(engine))

insert_res = client.insert(
    SortArgs(strings=["whale", "tiger", "bear"]),
)

First contact

I’ve been playing around with SQLite the last couple of days. I thought I knew a little about SQLite, but I didn’t, and am getting my remedial education through an accelerated gauntlet. Some of what I’ve learned of its quirks has left me reeling.

Top surprises:

  • ALTER COLUMN is not supported. Official recommendation for changing a column? Make a new table.

  • DROP CONSTRAINT is not supported. Official recommendation for removing a constraint? Make a new table.

  • SQLite doesn’t have data types on columns. Data types (and there are only five) are on values only, so anything can go anywhere.

  • If you ask for a column with an unsupported/non-existent type, it happily does the wrong thing without warning or error. I was creating a schema like CREATE TABLE my_table (id bigserial, messages jsonb[]), which seemed to be working, so I mistakenly thought for the first day that SQLite supported serials and arrays. It does not.

  • You can use CREATE TABLE my_table (...) STRICT to restrict columns to the five supported types: integer, real, text, blob, and any.

  • There was a lot of recent fanfare about SQLite’s new support for jsonb. Unlike in Postgres, jsonb is not actually a data type, but rather a binary format passed into and out of the built-in jsonb* functions. When persisted, it’s stored as one of the big five: blob.

  • Other fairly critical types are also missing. e.g. There’s nothing like timestamptz. If you want a date/time, you store it as a Unix timestamp integer or an ISO 8601-formatted string, and a number of built-in functions are provided to work with those.
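Most of these quirks are easy to reproduce from Python’s built-in sqlite3 module. A quick sketch (table and column names are invented; STRICT needs SQLite 3.37+, jsonb 3.45+, hence the version guards):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Without STRICT, any type name is accepted without error. SQLite derives
# a loose "affinity" from it, and a value of any type goes in any column.
conn.execute("CREATE TABLE loose (id bigserial, messages jsonb)")
conn.execute("INSERT INTO loose VALUES ('not a number', 42)")
print(conn.execute("SELECT id, messages FROM loose").fetchone())
# → ('not a number', 42)

# STRICT (SQLite 3.37+) restricts columns to the five real types, and
# actually enforces them.
if sqlite3.sqlite_version_info >= (3, 37, 0):
    conn.execute("CREATE TABLE strict_t (id integer, body text) STRICT")
    try:
        conn.execute("INSERT INTO strict_t VALUES ('nope', 'ok')")
    except sqlite3.IntegrityError as err:
        print(err)  # complains it cannot store a TEXT value in an INTEGER column

# jsonb() (SQLite 3.45+) is a function, not a type: its result is stored
# as one of the big five, blob.
if sqlite3.sqlite_version_info >= (3, 45, 0):
    print(conn.execute("""SELECT typeof(jsonb('{"a": 1}'))""").fetchone()[0])
    # → blob

# No timestamptz equivalent: date/times are Unix timestamp integers or
# ISO 8601 strings, manipulated through built-in functions.
print(conn.execute("SELECT datetime(0, 'unixepoch')").fetchone()[0])
# → 1970-01-01 00:00:00
```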

SQLite has some impressive features around streaming which I’m looking forward to playing with, but the initial developer experience has certainly been a little jarring.

On off days, I sometimes wonder if I’m bought into some narratives too strongly. Like, is Postgres really the world’s best database? Experiences like this certainly cement my conviction. Yes, it is.


Updated now, on RBAC and Python.


From Mike Solana, We Are the Media Now, on his publication, Pirate Wires, and on a new media landscape where journalists do journalism, and don’t hate you:

Why are you sharing scoops with journalists who hate you? Do you not understand how this works? I’m realizing it’s possible you don’t understand how this works. New information is the lifeblood of a media company. When you share it with a hostile outlet, you are feeding them. When you withhold it from a value-aligned company, if even inadvertently, you are starving them.

Send us your scoops. Send us your stories. Are you enjoying tech’s vibe shift? It didn’t just happen. Support the team that doesn’t want you liquidated, or don’t complain about the hateful tech press when they’re at your door with pitchforks. Will I still defend you when you’re unjustly maligned?

Linked as supporting evidence is the latest hit piece on Alexander Wang for the unforgivable sin of daring to suggest that meritocracy is a good thing. This remedial, second-grade-level writing was published in TechCrunch, by a person who mistakenly self-identifies as a “journalist”:

I would invite him — and those supporting them — to fuck all the way off. You misunderstand me. You thought I wanted you to fuck only partially the way off. Please, read my lips. I was perfectly clear: Off you fuck. All the way. Remove head from ignorant ass, then fuck all the way off.

Yes, that’s really the quality of discourse coming out of legacy tech media. I’m with Mike: don’t give anything to the legacy media. The sooner they crash to zero relevance (a course they’re already on), the sooner they cease to exist.


I ran a query today that deleted two rows and clocked in at just under two hours. A new personal record for time per row deleted: one hour per row.

=> DELETE FROM metric WHERE name = 'networkin' OR name = 'networkout';
DELETE 2
Time: 6957592.469 ms (01:55:57.592)

metric is a tiny table itself, but it’s referenced by the large partitioned tables metric_point and metric_point_aggregate, both of which need to be scanned in their entirety to verify that no remaining rows reference either of the two being removed. Luckily, the operation didn’t need to hold a significant lock, and inserts and reads on all tables stayed healthy throughout.
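The mechanism generalizes beyond Postgres: a referential check against a column with no index degrades to a full scan of the referencing table. A miniature illustration using SQLite’s query planner (schema reduced to a sketch; the existence check a foreign key performs on DELETE boils down to a query like this one):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE metric (id integer PRIMARY KEY, name text);
    CREATE TABLE metric_point (metric_id integer REFERENCES metric (id), value real);
""")

# With no index on metric_id, checking for referencing rows is a full scan:
(plan,) = conn.execute(
    "EXPLAIN QUERY PLAN SELECT 1 FROM metric_point WHERE metric_id = 1"
).fetchone()[3:]
print(plan)  # a SCAN of metric_point

# Index the referencing column and the same check becomes an index search:
conn.execute("CREATE INDEX metric_point_metric_id ON metric_point (metric_id)")
(plan,) = conn.execute(
    "EXPLAIN QUERY PLAN SELECT 1 FROM metric_point WHERE metric_id = 1"
).fetchone()[3:]
print(plan)  # a SEARCH of metric_point using the new index
```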


River now has a self-hosted web UI to help manage jobs and queues without having to drop down to a psql shell.

Paradoxically, it’s mostly written in TypeScript instead of Go, which is more of a testament to the state of Go’s templating system than anything else. It’s still distributed as a single binary thanks to go:embed.


Published fragment Sqlc 2024: check in, on some quick thoughts on whether sqlc is still the direction for Go projects now that we’ve been using it for three years.


I was amused to discover that Derek Sivers’ Nownownow.com also publishes its directory as a tab-delimited .txt file. I’ve had pretty good luck randomly finding interesting blogs to add to my reader from Now pages, and the plain text aspect makes it especially easy to search by city or country.


Published fragment Go: Don’t name packages common nouns, on avoiding naming Go packages after common nouns like rate or server so that they don’t clash with variable names, and how to find a more fitting name for them instead.



Published sequence 084, spherical aberration.


Published Eradicating N+1s: The Two-phase Data Load and Render Pattern, on using a two-phase data load and render pattern to prevent N+1 queries in a generalized way. Especially useful in Go, but applicable in any language.


Published fragment The romance of Europe, on a concert in Berlin, correcting for tourist bias, and how smartphones own the planet.


Published sequence 082, north of Warschauer.