One important implication of improving machine intelligence is that “content” will become increasingly convertible between formats and channels.
To publish seriously across multiple channels today, you usually need a team working for you. But over the next several years, even small solo creators will increasingly become omni-channel creators, as convertibility becomes cheaper, faster, and higher-quality.
For instance, a solo video creator on YouTube may also become a newsletter writer and a podcaster at the same time, which is something we haven’t really seen yet. This opportunity may require some modification of the initial format’s production process, to make it maximally convertible. If this happens, we might start to see strange trends in content, as creators learn to think and create in ways that make the AI post-processing pipeline faster and more reliable. Just as YouTubers today converge on certain weird behaviors to win the algorithm (“be sure to like and subscribe,” she says with completely unbelievable excitement…), we might see similar stylistic trends driven by the promise of AI.
This is Heidegger’s observation, by the way, that the power we can derive from technology tempts us to pre-format our thinking to be legible to technological systems ("...modern technology is a challenging, which puts to nature the unreasonable demand that it supply energy that can be extracted and stored as such").
Notice that this dynamic still holds even if you think AI will never match, let alone surpass, human intelligence. In the most bearish case for AI, it would only mean that some editing continues to be required. We know this because even already, as of today, convertibility is much better, cheaper, and faster than it was two years ago. So the only thing to debate is the degree to which, and the rate at which, convertibility approaches perfect and free—not whether this dynamic will occur.
For another implication, consider that some formats are more easily convertible than others. This means some creators will be more advantaged by AI than others, at least in this dimension and at least in the medium term.
The differential convertibility of formats may have significant strategic implications for solo creators just starting out. If you’re a young person and all you know is that you want to express yourself or develop ideas in the world, but you’re indifferent between writing, audio, and video, there’s a case to be made that—other things equal—you should start creating videos. Video is the most data-rich format: text and audio can be spun off from it with high fidelity, whereas we’re still very far from converting audio podcasts into watchable videos.
Even so, many top videos are scripted, so perhaps writing will remain the real coin of the realm indefinitely.
On the other hand, OpenAI Whisper just made dictation the fastest and least-draining way to write first drafts—to get content from your mind into text.
So basically, the process I’m increasingly using in my own work is something like this…
First, I use my irreducibly human powers to decide what is worth studying, thinking about, etc. As I’ll discuss below, the stakes here are increasingly high.
Other things equal, I increasingly prefer dictation for jotting down first drafts of ideas, hypotheses, and concepts, or observations about the books I’m reading. Currently my workhorse is the iOS app Scribe from Vienna.
It just so happens that I've used dictation for this purpose quite often over the years; even when it sucked and required heavy manual editing, it was just so fast and easy to get something "on paper." Now with apps like Scribe, the output is close to perfect almost all of the time, including punctuation, proper names, foreign words, etc.
AI for the learning part of the pipeline is, in my experience, not quite good enough to be worth much time or attention yet. For summarization, fact-finding, and similar tasks, it isn't yet quick and reliable enough to surface genuinely advanced information. ChatGPT still sometimes gets historical dates and numbers wrong, so to publish anything with its help you have to go check the sources to be sure, at which point you might as well skip ChatGPT and work only from the sources.
I use my irreducibly human powers to edit text to my liking, and decide what I should do with it.
But I’m already actively using AI to convert between formats for different channels.
I might need to write a separate post on this, because there’s enough action here to be worth a deep-dive, but let’s just say that savvy writers and creators can already use these tools in production—cautiously. They require a lot of finesse right now, and cheap automated content will never fly with a sophisticated audience. But realize that many romantically celebrated scholarly labors are really just ideologically glorified algorithms. Hint: Translation is 99% a convertibility problem, 1% artistic human finesse.
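As a minimal sketch of what one such conversion step can look like in practice (the function name and prompt wording below are my own illustrative assumptions, not any particular tool's API):

```python
# Illustrative sketch: wrapping source content in a format-conversion
# instruction that would be sent to a language model. The prompt wording
# and function name are assumptions for illustration, not a real API.
def conversion_prompt(content: str, source_format: str, target_format: str) -> str:
    """Build an instruction asking a model to convert content between formats."""
    return (
        f"Convert the following {source_format} into a {target_format}. "
        f"Preserve the author's voice, claims, and examples; do not add "
        f"new facts.\n\n---\n\n{content}"
    )

# e.g., turning a podcast transcript into a newsletter draft:
prompt = conversion_prompt(
    "Today I want to talk about convertibility...",
    "podcast transcript",
    "newsletter draft",
)
```

The finesse lives in the constraints you put in the prompt and, above all, in the human edit of the model's output afterward.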
Translation has a romantic aura but that’s just commercial advertising by bookish polyglots. Want proof? I only have a feeble smattering of intermediate French but I can now produce perfectly serviceable and authentically original translations from French to English. Some will protest—stolen valor!—but you must understand that authentically learning a language is literally just memorizing an algorithm. The human value-add of the thoughtful translator is to soften and adjust the first draft of the algorithm which they run in their memory. Today, I run the same algorithm on a computer, no more or less machinic than when it runs on human wetware, and I contribute the same human finesse at the end. I am an original translator and I am valid. Comparative Literature PhDs will protest but of course they would. Their comparative advantage is toast.
Now, notice what’s missing.
What are the most future-proof elements of the human writer or creator?
First, it’s the creator’s brand: The accumulated human perception of a creator’s cognitive, aesthetic, ethical, and sociological character, placement, meaning, and value. If that sounds pretentiously grandiose, notice that the word "brand" is, on the contrary, vulgar and reductionist. Both perceptions are correct, and together they help us understand why brand will become more valuable as AI becomes commoditized.
Brand works precisely because it indexes a diffuse cloud of human sensibilities. As rote work gets offloaded to AIs, what will really distinguish creators is whatever they ultimately represent.
One might say that the advancement of AI is the process of revealing what creators ultimately represent. Today, the most vapid jock in the world can grow a Twitter audience to 50k followers publishing lightly paraphrased versions of inspirational ideas already published by others. The traits that enable this type of publishing success are ambition, hard work, self-confidence, and perhaps a little bit of shamelessness. In all earnestness, it takes a lot of legwork and discipline to publish a bunch of copypasta Twitter threads every day. But as this legwork becomes commoditized, the noise increases and real signal becomes comparatively more precious.
The implication is that creators right now should be focusing everything on solidifying their signal.
There are two main components to your underlying signal, which is the same thing that people mean by “brand.”
The first is just crystallized knowledge, i.e. what do you actually know? If not much, you better get started studying something real, and studying it deeply. It’s one of the only ways you’re going to rise above the swelling deluge of noise. In all of my experimentation, I’m struck by how badly AI deals with the interesting and unique pieces of knowledge I’ve derived from paper books over the years. Go and try it. It’s very comforting to see that you still have access to troves of knowledge currently impenetrable to AI.
Don’t waste your time fiddling with the explosion of new AI tools (which is a mimetic trap that has low expected value on a risk-adjusted basis, given that most early tools are finicky and won't survive).
Spend your time redoubling your rare, specific, unique knowledge. As AI becomes commoditized, it will get baked into everyone’s everyday computing tools, so all of the alpha will come from actually having something to say.
The second element of your unique signal is what we might call your style—everything you love, value, and feel intensely, in the unique way that you personally love, value, and feel it. This is the only other thing that AI will not encroach upon any time soon (some would say never, though I prefer to remain agnostic). It’s also one of the main reasons anyone in the world follows or subscribes to anyone else in the world, other than unique crystallized knowledge.
Notice that much of your irreducibly human value-add comes at the very beginning of the work chain: Deciding what books to read (because you think they are valuable or fascinating), deciding what content not to consume (because it's not suited to the type of person you are, or wish to be). That's why I sold my TV. The stakes are so high right now; there will be so much upside in merely being correct that I want to ensure I spend all of my waking hours trying to be just a little more correct.
This is also why I've become apocalyptically passionate about Urbit, because I believe it's our best chance to save human minds from the internet without giving up the power of networked computing.
These implications are, I think, fairly counterintuitive. Many think the rise of AI means they should learn to code, or they should be “prompt-engineering” all day, or writing blog posts with still-crappy AI assistants barely serviceable for shameless copywriters. Some of these things might be fun, and in the very short-term there might be some alpha in narrow contexts, but if AI continues to advance, then all of this tinkering today will be washed out by the larger tidal wave.
As that wave gathers force, all you have to do is answer one question: Who are you, exactly?