#50: Type Slowly
Welcome* to #50! This week Kyle Chayka published an excellent piece about algorithmic style—how traditional originality eludes us in the age of targeted recommendations from companies that synthesize trends out of data. He surveys the recent evolution of quantified preferences, from SEO keywords to Spotify's Discover to LOT2046, which reduces your clothing taste to "a series of signifiers that the service automates and adapts to you." Chayka coins a useful term, Generic Style: "Every platform, canvassed by an algorithm that prioritizes some content over other content based on predicted engagement, develops a Generic Style that is optimized for the platform's specific structure." Now that several of these platforms have achieved global ubiquity, their various Generic Styles contend for our physical environments, which adapt in visible ways, like becoming conspicuously Instagram-friendly. That logic also compels us to adapt ourselves in similar ways.
It's possible, though, that algorithms aren't the biggest threat to originality and individual taste as we know them. Algorithms, as Chayka points out, have always guided human decisions: "Aren't cities really just highly attuned machines for sorting people according to their interests and desires?" Here Chayka echoes Venkatesh Rao's point (relayed by Marc Andreessen) that geography is the strongest filter bubble. This new digital conformity, perhaps, arises from other characteristics of the internet: our constant exposure to the information firehose, the eradication of distance (and the consequent decline of regional difference), or our increasingly sophisticated tools for projecting our personas out toward our little audiences. Much of this used to happen in "dumb" analog ways, and Chayka concludes his essay by suggesting that we recover some originality by becoming a bit more analog ourselves.
I have a suggestion: Companies that collect data about us shouldn't just show us what we have in common with their data-derived archetypes. They should show each of us the individual quirks they also observe. Every quantitative model of human behavior consists of the model's actual prediction (its stereotype about each of us) and its error: the unique qualities we exhibit that don't fit the stereotype ("Drew loves Steely Dan but never listens to Countdown to Ecstasy"). No matter how granular they get, the models that chase us around the internet and cloyingly attempt to reflect our selves back to us never quite get it right; that residual error is unavoidable (the better the model, the smaller the error, but it never reaches zero). Platforms make money by grouping us according to shared characteristics that enable targeting at some larger scale than the individual; by showing us this part of ourselves and disregarding the weird, unclassifiable part, they subtly reinforce a conformist worldview that we surely internalize. What if they showed us both?
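For the technically inclined: the prediction-plus-error split above can be sketched in a few lines of Python. Everything here is invented for illustration — the play counts, the albums, and the deliberately crude "model" (predict a listener's average plays for every album) are hypothetical, not anything a real platform does.

```python
# Hypothetical play counts for one listener, by Steely Dan album.
plays = {"Aja": 40, "Gaucho": 35, "Countdown to Ecstasy": 0}

# A crude "stereotype" of the listener: fans play every album about
# equally, so predict the listener's own average for each album.
predicted = sum(plays.values()) / len(plays)

# The residual: what the stereotype misses about this one person.
residuals = {album: count - predicted for album, count in plays.items()}

for album, r in sorted(residuals.items(), key=lambda kv: kv[1]):
    print(f"{album}: residual {r:+.1f}")
```

The big negative residual on "Countdown to Ecstasy" is exactly the unclassifiable quirk described above: observed taste = prediction + error, and the error term is where the individual lives.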
I wrote a longer essay on the blog about contemporary transportation and the value systems we use to approach the subject. It's effectively a book review of Ivan Illich's fantastic, radical Energy and Equity. Check it out!
*The subject line of #1 (which only eight of you saw) was "More Human than Human," the title of a White Zombie song. I revisited the '90s rock theme for this milestone ("Type Slowly" is a Pavement song). This is also a nod to Will Leitch, whose newsletter format I adopted and who named his first 101 newsletters after Nirvana songs but finally ran out this month.
Reads:
"Databodies in Codespace" by Shannon Mattern. A thoughtful exploration of the relationship between the public sphere and the Quantified Self.
Don't print this PDF unless you need one square kilometer of black paper.
Until next time,
Drew