The buzz is now deafening: labeled “machine learning,” or “artificial intelligence” (AI), or just “smart-[insert object here],” the idea that computers are interacting with the real world in ways we had thought distinctively “human” has swelled to a sloppy, hyperbolic din. Christie’s fall sale of an “AI painting” is representative: the price was ridiculous and the picture was ugly, so the click-bait was irresistible. But the event did little to clarify what is interesting or valuable or provocative about the current state of human-machine relations.
The implications of machines learning, rather than just executing instructions, are profound, though the basic idea is pretty simple: rather than telling the computer to, say, move a pen 2 cm, turn 90° and repeat three times, you show it lots of examples of squares, ask it to draw one, and feed it digital doggy treats until it gets it right. (Needless to say, God and the devil both lurk in the details.) Most of us use computers to improve execution—we want them to do what we tell them to do, but we also want them to change “teh” into “the” without asking. The big—potentially existential—question is this: how much “without asking” are we okay with?
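The contrast can be made concrete in a few lines of code. What follows is a toy sketch, not any real machine-learning system: the function and variable names are invented for illustration, and the “learning” is just a loop that nudges a guess toward the examples it is shown—the digital doggy treat.

```python
# 1) Execution: tell the computer exactly what to do, step by step.
def draw_square_by_instruction(side=2):
    """Return explicit pen moves: forward `side` cm, turn 90 degrees, four times."""
    return [("forward", side), ("turn", 90)] * 4

# 2) Learning: show examples and reward guesses that move toward them.
def learn_side_length(examples, steps=20):
    """Start with a bad guess and nudge it toward the examples' average."""
    target = sum(examples) / len(examples)
    guess = 1.0
    for _ in range(steps):
        error = target - guess
        guess += 0.5 * error  # the reward signal: each nudge is a "doggy treat"
    return guess

examples = [4, 4, 4, 4]  # the squares we show it all have side length 4
print(draw_square_by_instruction())          # eight explicit pen moves
print(round(learn_side_length(examples), 2)) # a guess that converges near 4
```

The difference in kind is visible even at this scale: the first function will only ever do what it was told, while the second arrives at its answer without being given one—which is exactly where the “without asking” question begins.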