There is increasing awareness that social media can be bad for the mind—we use words like "addiction" and "distraction"—but we rarely discuss how social media can reshape the contents of the mind.
Specifically, we underestimate the implications of platform selection bias.
Consider the 280-character aphorism format, known as a "tweet" when broadcast to the Twitter network. The 280-character limit is one quantitative aspect of this format, but formats also have subtler qualitative aspects. For instance, Twitter tends to favor certain stylistic templates, based partially on what people desire and partially on what the algorithm wants for itself.
The main problem is not that people will overproduce what the platform rewards. This is a problem, but it's somewhat self-correcting.
The more serious problem is that authors on a platform will stop thinking thoughts unlikely to be rewarded by the platform. This problem is not self-correcting but self-effacing.
When the expected memetic traction of an idea is conditional on how the idea is formatted, we do not merely bend the outward expression of a new idea into advantageous formatting—as if we think purely first, and only later publish instrumentally and politically. Rather, we begin to pre-format thinking itself, and avoid thoughts that are difficult to format advantageously. We feel that we publish purely and freely, but only because we've installed the instrumental filter at a deeper, almost unconscious level.
Think of all the true or beautiful statements you would never post to Twitter.
Think of all the true or beautiful ideas you might post to your email list, but which you never get around to publishing because they just don't feel "valuable" enough for the newsletter format.
These true and beautiful ideas might be the kind of ideas that a Goethe or a Kierkegaard or an Emerson might accumulate over time and compile into an epic book. But today you are not creating or accumulating such ideas because your media channels do not reward such ideas.
If you count up all of the true or beautiful expressions that thinking beings are not sharing with other thinking beings today—which they would otherwise share given a different arrangement of media channels—you can start to build a picture of the total negative utility caused by platform selection bias.
These are the kinds of realizations that got me into Urbit. One reason I like Urbit is that, so far, there are no algorithmic selection effects. Selection effects will emerge eventually, but what's unique about Urbit is that communities will engineer their own.
A community of writers and software developers could decide, together, what types of channels should exist, how they should be linked, and how content should be surfaced (or not surfaced). Now that money is becoming programmable on Urbit, intelligently designed community structures will begin to look more like startups.