Content authored by ChatGPT front pages

Jan 7, 2023

An article written by ChatGPT hit the front page of Hacker News this morning. Maybe not the first time that’s happened, but a definitive instance, and surely only the beginning of a continuing trend.

Timeline:

  • “The creator economy: the top 1% and everyone else”, front pages. It’s a subject many HNers are interested in, and produces a healthy comment thread.

  • Someone points out that the prose seems stilted and repetitive. Oddly AI-like.

  • Another user runs it through an OpenAI detector and finds it indeed looks fake.¹

  • The “author” admits it was generated by ChatGPT and, unencumbered by anything so inconvenient as a personal moral framework, doesn’t see what the big deal is.

  • There’s enough uncertainty (and probably manual flagging) that the thread dies around 50 points.

You could interpret this result in different ways depending on whether you’re an optimist or not:

  • The deep fake was detected. The system works! We’re going to be okay.

  • The AI content was uncovered, but barely, and only thanks to the kind of extremely astute reader that you find on HN and basically nowhere else. Even the most trivial effort on the progenitor’s part to do some manual editing, add links/citations/data, or remove obvious AI-isms might’ve allowed it to coast by unnoticed.

Personally, I think we’re standing at the threshold of a brave new world, and it’s not pretty. We’re about to be drowning in this stuff.

A question: should civilized society have a convention for responsible disclosure when using a tool like ChatGPT to aid content creation? You don’t disclose that you used a word processor to correct grammar, but it feels like a different animal when the tools themselves are doing the writing.

¹ When I tried running the content through an OpenAI detector myself, I couldn’t reproduce the “fake” result.
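For the curious, a check along these lines is easy to run at home. Here’s a minimal sketch, assuming the Hugging Face transformers library and OpenAI’s publicly released roberta-base-openai-detector model (a GPT-2-era classifier, so only a rough proxy for ChatGPT output; which detector the HN user actually ran is unknown, and the excerpt below is a made-up stand-in for the article’s text):

```python
# A minimal sketch of the kind of detector check described above,
# using OpenAI's publicly released RoBERTa-based GPT-2 output
# detector. This is an assumed setup, not the HN user's actual tool.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

# Hypothetical excerpt standing in for the article's actual text.
excerpt = (
    "The creator economy has experienced explosive growth in recent "
    "years, empowering individuals to monetize their passions."
)

# The model labels text "Real" or "Fake" with a confidence score.
result = detector(excerpt, truncation=True)[0]
print(f"{result['label']}: {result['score']:.2f}")
```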

Did I make a mistake? Please consider sending a pull request.