On using DialFunc to return a minimal stub for net.Conn, one that can simulate hard-to-reproduce conditions like an error on Close:
type connStub struct {
	net.Conn // embedded, so all methods not overridden delegate to the real conn

	closeFunc func() error
}

func newConnStub(conn net.Conn) *connStub {
	return &connStub{
		Conn:      conn,
		closeFunc: conn.Close, // defaults to the real Close; override in tests
	}
}

func (c *connStub) Close() error {
	return c.closeFunc()
}
There was a bug in riverpgxv5’s Listener where it wouldn’t unset an internal connection if Close returned an error, making the listener not reusable.

We’ve run some open-ended tests with continuous job insertion and work over long periods. Memory use is stable, so we think those were the only leaks in there (at least for now).
PREPARE add_flag_to_team(text, uuid) AS INSERT INTO flag_team (
flag_id,
team_id
) VALUES (
(SELECT id FROM flag WHERE name = $1),
$2
) ON CONFLICT DO NOTHING;
EXECUTE add_flag_to_team('<flag_name>', '<team_id>');
The total cost of hosting this site for January: $3.08. That doesn’t seem like a bad deal, but digging in a little, it turned out I was overpaying.
Of $3.08, $3.07 was for S3 (I was mildly surprised to see that all my CloudFront use fits in the free tier). And of that, $2.15 was for PUT, COPY, POST, or LIST requests on this site’s bucket. The GETs used to actually serve the site are cheaper, and added up to only $0.39.
The S3 lists and mutations are generated by the build process, which syncs the built product to S3 from a GitHub Action. The majority of builds are automated on cron – some parts of the site, like /reading and /twitter, ingested data from the Goodreads and Twitter APIs, so I’d had the site building every three hours to pick up changes.
But over the last year, both those APIs have experienced unceremonious deaths, reducing the dynamic content on this site to zero. All relevant changes are now pushed through Git, leaving the cron schedule a vestige of better times.
I reduced the cron frequency from every three hours to every three days (still a good idea to check periodically that the build still works) – 24x fewer builds, which should bring that $2.15 down by an order of magnitude.
Saving $2 this way was certainly not worth the time (that’s about 1/3rd of a single San Francisco coffee these days), but hey, it’s fun.
The go command has a bundled toolchain, but is capable of fetching other ones as necessary.
Today, to upgrade my project to Go 1.22, all I had to do was change one line in go.mod:
$ git diff
diff --git a/go.mod b/go.mod
index 49a960839..f1b3ff857 100644
--- a/go.mod
+++ b/go.mod
@@ -11,7 +11,7 @@
module github.com/crunchydata/priv-all-platform
-go 1.21
+go 1.22.0
The next Go command I ran detected the absence of Go 1.22, and downloaded it, producing the most streamlined upgrade path of all time.
$ go version
go version go1.22.0 darwin/arm64
typing is the secret to using the computer. if you’re not typing, you should be clicking on stuff. if you’re scrolling, then it’s already over… you’re not doing shit
A little brash on the surface, but a font of wisdom below. If you’re on a computer scrolling, nothing useful is happening. Best to stop using said computer and go do something better.
I started a daily running streak going into France, and hit 163 consecutive days before it ended with my trip to the John Muir Trail. This was a great habit – wanting to keep the streak going, I’d be out there every day rain or shine, even when I hated it. There was a moment where I arrived in Berlin late in the evening, exhausted from three days of long distance travel, but still, like every other day, dragged myself to the door, put on my running shoes, and ran around in the dark until I hit my 5 km minimum.

Running is such a great way to see Europe. By the end of my stay in any place – Paris, London, Bath, York, Berlin – I’d have multiple routes figured out and strong opinions on which were the best. Europe’s a crowded place, so I’d wake up as early as I could, often starting before the sun was out. Getting my run done early also improved the chances that it’d get done at all.
Takeaways for 2024: Habit streaks work, early is better than late, and without nutrition, fitness is only half of the whole.
Wal2 fixes the problem by juggling two wal files which its writer and checkpointer alternate between:
In wal2 mode, the system uses two wal files instead of one. The files are named “[database]-wal” and “[database]-wal2”, where “[database]” is of course the name of the database file. When data is written to the database, the writer begins by appending the new data to the first wal file. Once the first wal file has grown large enough, writers switch to appending data to the second wal file. At this point the first wal file can be checkpointed (after which it can be overwritten). Then, once the second wal file has grown large enough and the first wal file has been checkpointed, writers switch back to the first wal file. And so on.
When a measure becomes a target, it ceases to be a good measure.
My book list is now a flat file in TOML instead. More robust, but it’s a shame how, one by one, every third-party API this site ingested for aggregation has disappeared – Goodreads was the last one standing, and now it’s gone.
It’s not documented, but even with an existing API key, somewhere around mid-2022 they crippled the API so that it no longer returns many properties on books – e.g. publication year or number of pages. It’s still possible to extract reviews, but that’s about it.
I recently restyled this site’s /reading section, which had still been syncing from Goodreads (although I’m about a year behind on reviews). The API still technically works, but given the certainty of deprecation and truncated functionality, it seems as good a time as any to move back to a flat file.