“Async Reads” collects writing I find worth sharing, for one reason or another. An article being included here does not imply my endorsement (or lack thereof) of the author or their opinions. It only reflects a very broad, subjective measure of quality of the writing itself.

(1) “Polymaths are back from the dead”

Erik Hoel on the possible renaissance of polymathy, driven by generative models:

To put it simply: there are two kinds of thinkers. Those rate-limited by expertise, and those rate-limited by creativity. Slowly but consistently, the rate-limiting factor for intellectual contribution has become ever deeper expertise.

(…)

Now, I’m a known critic of AI. (…) But like with any new technology, you cannot be an honest critic if you cannot admit the positives. And I’ll admit that AI is a clear boon to polymaths and, more broadly, those more rate-limited by expertise than creativity. It favors the lone creators who have been, historically for decades now, buried amid collaborative teams. For this reason, I predict a new breed of polymaths who make use of AI to work across a far greater range than the previous generation (and specialists to be more individually productive).

(…)

Getting good at programming (which even at my peak wasn’t near professional level) was a trying inconvenience I had to overcome for what I actually wanted to do, which was science. If ChatGPT had been around when I was in graduate school, this barrier would have vanished in the puff of a $20 subscription, and I could have focused more on evaluation and coding tests—probably publishing twice as many papers. I had tons of ideas, and while I was indeed rate-limited by expertise, what was most annoying was that the lacking expertise wasn’t even in the domain that mattered.

(2) “Learn in Public”

My good colleague Heye Vöcking has some wise words to share about the positive feedback loop between learning in public and teaching others:

Traditional learning: You’re learning how LLMs choose the next token. You write an explanation on paper, pretending to teach it to an imaginary sixth-grader: “The AI looks at all possible words and picks the most likely one based on what it learned during training.” You realize you don’t understand the probability calculation, study more, and refine your explanation. The learning remains private.

Learning in public: You follow the same process but document and publish that explanation on your blog, explicitly noting your confusion about the probability calculation. Now the magic happens: a machine learning engineer comments with a clearer explanation, someone shares a helpful visualization, and a student asks a question that reveals another gap. Your learning becomes collaborative.
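The sixth-grader explanation quoted above ("looks at all possible words and picks the most likely one") can be sketched in a few lines. This is my own illustrative example, not code from the article; the tiny vocabulary and the raw scores are made up for demonstration:

```python
import math

def softmax(logits):
    # Convert raw model scores into a probability distribution.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and scores a model might assign for the
# next token after the prompt "The cat sat on the".
vocab = ["mat", "dog", "moon", "roof"]
logits = [4.0, 1.5, 0.5, 2.0]

probs = softmax(logits)

# Greedy decoding: pick the single most likely token.
next_token = vocab[max(range(len(vocab)), key=lambda i: probs[i])]
```

In practice, models usually *sample* from this distribution (often reshaped by a temperature parameter) rather than always taking the top token, which is why the same prompt can yield different continuations.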

(…)

Each concept you explain becomes what researchers call a semantic node in a broader knowledge network. These artifacts must be machine-readable to maximize impact: just as search engines need to understand your content to rank it effectively, your learning artifacts need proper structure, clear terminology, and semantic markup (like a Wikipedia page the internet links to).

I encourage you to read the second part as well, which expands on this concept with semantic markers and practices that enhance discoverability and shareability for both humans and software systems.

(3) “Clawed - On Anthropic and the Department of War”

Dean W. Ball wrote a thorough piece on the potential death of the American Republic through the lens of the recent political attacks on Anthropic (for more context, read Anthropic’s take on the issue as well).

There is a lot of nuance in Dean’s writing, and all of it warrants a careful read:

At some point during my lifetime—I am not sure when—the American republic as we know it began to die. Like most natural deaths, the causes are numerous and interwoven. No one incident, emergency, attack, president, political party, law, idea, person, corporation, technology, mistake, betrayal, failure, misconception, or foreign adversary “caused” death to begin, though all those things and more contributed.

(…)

I am now going to write about a skirmish between an AI company and the U.S. government. I don’t want to sound hyperbolic about it. The death I am describing has been going on for most of my life. The incident I am going to write about now took place last week, and it may even be halfway satisfyingly resolved within a day.

(…)

Here are the facts as I understand them: during the Biden Administration, the AI company Anthropic negotiated a deal with the Department of Defense (now known as the Department of War, hereafter referred to as DoW) for the use of the AI system Claude in classified contexts.

(…)

Trump officials claim to have changed their mind not so much because they want to do mass surveillance on Americans or use autonomous lethal weapons imminently, but because they object altogether to the notion of privately imposed limitations on the military’s use of technology.

(…)

The Department of War’s rational response here would have been to cancel Anthropic’s contract and make clear, in public, that such policy limitations are unacceptable.

(…)

But this is not what DoW did. Instead, DoW insisted that the only reasonable path forward is for contracts to permit “all lawful use” (a simplistic notion not consistent with the common contractual restrictions discussed above), and has further threatened to designate Anthropic a supply chain risk. This is a power reserved exclusively for firms controlled by foreign adversary interests, such as Huawei, and usually means that the designated firm cannot be used by any military contractor in their fulfillment of any military contract.