#66: Black Magic Bots
There are many ways to define "technology," but my favorite definition is everything that doesn't work yet. At this particular moment, a subset of technology—let's call it "Algorithms"—doesn't so much "not work yet" as seem actively harmful. That apparent malice is probably just an extreme case of not working yet, and it may attest to our poor grasp of the technology itself, which has led us to assign Algorithms many tasks they're not ready for. We certainly don't all need to understand how everything works, but here we've done ourselves a disservice by misunderstanding Algorithms in a specific way that Joanne McNeil recently described in an essay about how Netflix isn't actually very good at figuring out what she wants, and how algorithms in general aren't as perfect or objective as we've wanted to believe. She writes, "If technology is routinely legitimized by delusions about its impartiality and misplaced faith in its precision, perhaps a wider public acknowledgment of its capacity to fail might slow its unrelenting advance."
McNeil's key observation is that we've overrated algorithmic technology, assuming it already works well all the time and effectively attributing magical properties to it. Venkat Rao has observed that we approach artificial intelligence with a religious attitude, drawing a sharp sacred/profane distinction between "AI stuff" and "non-AI stuff." If something seems magical or godlike, of course, two responses naturally follow: the assumption that it's unknowable or ineffable, and a reverence that can easily turn into an equivalent level of fear. Our irrational enthusiasm for Algorithms, in the era when we thought they'd figure everything out for us, led us to embed them in every domain we could (an era that's not exactly finished—I'm speaking from a hypothetical future). The 20th-century mentality that led us to overbuild automobile infrastructure wasn't so different from this; cars even became objects of worship themselves. Everyone loved Robert Moses when he built a bunch of parks (and parkways) on Long Island, just as everyone loved Mark Zuckerberg when Facebook was muddling from campus to campus and not yet an advertising juggernaut. The second-order problems came later.
When we shoehorn an immature technology into too many places it doesn't belong, we ultimately have two options for improving its performance: admit that we've overreached and assign the new tools to humbler tasks, or reshape the world to make them work better. Rao makes a strong case for the former, suggesting that AI will advance more through mundane "interior" problems like factory automation and "installing ceiling fans" than through sexy "boundary" problems like chess, self-driving cars, or speech recognition. Unfortunately, there's plenty of evidence that we're attempting the second course, simplifying the external environment so that computers can better understand it: Facebook flattens human communication by replacing words and emojis with a choice of six "reactions." Amazon is reshaping urban environments as logistics landscapes. Kyle Chayka, reflecting on Google's Smart Reply feature and digital recommendations, asks, "are these really my thoughts, taste, or desires, or are they just an algorithm-generated facsimile of what was once more organic reality?" We're currently stuck with an excess of roads and suburban sprawl we wish we hadn't built, but we still have a chance to stop ourselves before we reproduce that mistake in a new domain.
Reads:
What I learned from a Taipei alley: A thoughtful post by Eugene Wei about why New York doesn't have alleys, why Taipei handles its garbage so well, and the limitations of incremental solutions.
Someone wrote a master's thesis on the aesthetics of sci-fi spaceship design. Via Dan Hon's newsletter: "It has a great flowchart on how to decide whether a given science fictional spaceship is human (e.g. looks like it was manufactured, is grey or blue) or is alien (organic, looks frankly evil)."