#72: Machine Weirding
It's been interesting to watch the shifting attitude toward artificial intelligence and its capabilities over the last couple of years. The hope and fear that AI would be able to do anything humans do, and thus make us irrelevant in most domains that don't require having a body, seem to have faded in favor of nervous ridicule: Computers won't take over the world if they can't even learn not to recommend Incubus to me on Spotify (to use a recent personal example). Venkatesh Rao wrote one of the best assessments of this condition, observing that we attribute an irrational mysticism to artificial intelligence's "boundary problems," like autonomous driving, but would be better off thinking of most AI applications as patching the hole created by "ripping out a human from the solution to a problem." Adopting this approach—understanding that AI is frequently just more mundane digital plumbing, rather than magic—will enable us to ask the next question: What kinds of problems do we actually want to remove humans from?
There is one broad category of tasks that artificial intelligence is unquestionably good at already: Reflecting existing human behavior back to us, often in unflattering ways; outlining the contours of how people are embedded in the world and what those same spaces might look like without them. In October, Amazon scrapped an experimental recruiting tool that used machine learning to analyze resumes and recommend the best candidates, because it turned out that the tool had "taught itself that male candidates were preferable." The software, of course, learned its gender bias by synthesizing Amazon's existing hiring practices and then decontextualizing them so that the very people who already operationalized the bias could be horrified by it at a slight distance. Hiring employees may be too deeply human to automate with software, but Amazon's machine learning tool served a valuable, unintended purpose, exposing unhealthy behavior to people who couldn't otherwise see it.
Maybe, then, we can embrace AI not as a way to finally remove humans from as many situations as possible (particularly the situations that humans actually like being in), but as a way to understand how we occupy those situations in the first place, and perhaps learn to get better at doing so. If not to expose toxic HR practices, we can at least use AI to become more interesting. An overly pessimistic article recently lamented the "unbearable sameness of cities," blaming American urban uniformity on Instagram-fueled Brooklynization. Machine learning could help: By training software to generate new arrays of speakeasies, food halls, and gastropubs, we could better understand how unimaginative and repetitive that stuff gets, and thus raise the bar. In Japan, the word unibare is used to make fun of someone who is clearly wearing all Uniqlo clothing. Having a word for this concept induces creativity; everyone ridiculed for dressing unimaginatively gets an opportunity to try harder. Artificial intelligence often ridicules us similarly; instead of asking computers to dress us, we should use their recommendations to become less predictable.
Reads:
What it's like to walk to LaGuardia Airport. "I had this theory that airports would be better—both as transportation facilities and civic spaces—if they were more intimately intertwined with the cities they serve." See also: the Tom Chiarella classic, "Walking to the Mall" (which gets bonus points for taking place in my hometown, Indianapolis).
Kate Wagner (@mcmansionhell) on how changes in architecture and dining trends have resulted in restaurants getting louder.
I Don't Date Men Who Yell at Alexa. Another convincing argument that how we treat robots and computers reflects, and maybe also influences, how we relate to other humans.