A.I. Needs Design Thinking (Part 1 of 2)
Why some A.I. products feel like magic while others fail at the most mundane tasks
My “smart” bedroom scale refuses to recognize me! Five years into our relationship, it still sporadically greets me as “GUEST.” How hard is it to recognize that I’m the same user as yesterday? It’s perplexing how some A.I. products fail at such mundane tasks while others deliver truly magical experiences. Case in point: Google Photos can identify my family members in 50-year-old photographs, most of whom I can’t recognize myself. What explains this gap in A.I. experience?
It’s tempting to think the difference is simply about A.I. performance. But that explanation loses plausibility when you consider that facial recognition classifiers solve a far more intricate problem than the classifiers in my scale. In fact, a similar experience gap exists within products that use facial recognition. In 2019, The Independent reported that in London "[facial] recognition technology has misidentified members of the public as potential criminals in 96 per cent of scans." While scanning unsuspecting passersby, 24 of every 25 matches to a police database were "false positives," jargon that here describes innocent humans falsely accused by software. In one case, a 14-year-old boy was accosted by the police before they realized their error. The £200,000 software trial, meant to reduce violence, instead ended in a lawsuit, a government probe, and outrage from civil rights advocates.
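To see how that arithmetic works, here is a quick back-of-the-envelope sketch in Python. The counts are the ones reported above; everything else is just illustration:

```python
# Of every 25 "matches" the London system raised, only 1 was genuine.
flagged = 25
true_matches = 1
false_positives = flagged - true_matches  # 24 innocent people flagged

# The share of matches that were false positives:
print(f"{false_positives / flagged:.0%}")  # -> 96%
```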
What explains the difference between these dismal results and other products that use similar technology but deliver? Facial recognition unlocks our phones, tags our friends when we post photos online, and organizes our personal photo libraries. Yet we don't notice anything like a ninety-odd percent failure rate in our photo libraries or when unlocking our phones. This experience gap, often mistakenly explained away by A.I. performance, deserves a closer look. In this article I present an important reason why the very same powerful A.I. that helps the blind see, diagnoses diseases, and finds lost pets also erodes civil liberties and leads reputable companies like Nikon and H.P. to be accused of racism.
Design Thinking
The fact that many A.I. products fail to deliver is no surprise to anyone who's ever tried to build one. I know this because I’ve co-founded three startups whose success hinged on A.I. We used it for personalization, for encouraging behavior change, and even to make self-driving robots. In every case, I spent many sleepless nights pondering the eternal question: “Will it ever work?” I once pledged to my team that I wouldn’t shave my face until we shipped, because they were losing patience after perpetual delays. Everyone was wondering out loud: ”Will it ever work?”
My lighthearted pledge to bolster confidence worked. But along the way, besides finding ways to calm nerves, I learned important lessons about building A.I. products, none more important than the value of applying design thinking. By encouraging a human-centric approach to innovation, design thinking puts the focus on people’s needs rather than on technology. Design thinking is often used to create better user experiences. That much is obvious. But when applied to A.I., it can also reduce algorithmic complexity, shorten time to market, decrease development cost, and increase the odds of success for both the product and the business.
How does it do all that? Consider the law enforcement scenario. Despite the failure in London, other police forces have had more luck with facial recognition. In Florida, it was successfully used to identify detainees who were uncooperative or presented fake IDs. Notice the key difference though: in London facial recognition was used on unsuspecting pedestrians, whereas in Florida it was used on detainees who were... well... already detained. The police in Florida had time to review the A.I. results and not overreact to every error. Similarly, my not-so-smart bedroom scale could have been designed not to overreact to every one of its algorithmic errors. Imagine if the scale always defaulted to a known user, rather than labeling users as “GUEST” as it so often does today, but afforded users the option to quickly reject a wrong match.
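To make that concrete, here is a minimal sketch of what such a design could look like. This is not how any real scale works; the weight-proximity heuristic, names, and threshold are all assumptions for illustration:

```python
GUEST_THRESHOLD = 0.1  # give up on matching only below this confidence

def identify_user(weight_kg: float, known_users: dict[str, float]) -> str:
    """Guess which known user stepped on the scale.

    known_users maps each user's name to their last recorded weight.
    Confidence is a toy heuristic: closeness of today's reading to a
    user's last one (1.0 for an exact match, decaying to 0 at 10 kg off).
    """
    def confidence(last_weight: float) -> float:
        return max(0.0, 1.0 - abs(weight_kg - last_weight) / 10.0)

    if not known_users:
        return "GUEST"
    name, last = max(known_users.items(), key=lambda kv: confidence(kv[1]))
    if confidence(last) < GUEST_THRESHOLD:
        return "GUEST"  # no plausible match at all
    return name  # default to the best guess and let the user reject it

# The UX lever: show the best guess and make rejection one tap away.
user = identify_user(82.3, {"Amir": 82.0, "Dana": 61.5})
print(f"Hello, {user}! Not you? Tap to switch.")
```

Notice that the A.I. hasn't changed at all; only the failure mode has, from an accusatory “GUEST” to a one-tap correction.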
The user-centric perspective imposed by design thinking leads us to redefine our products in ways that mask algorithmic imperfections. Intelligently designed products can simplify the task of A.I., just as intelligent applications of A.I. can improve product experiences.
For those interested, there are many great books, courses, and resources about design thinking, including some that focus on its applications to A.I. In Part 2, I will present a two-step framework that I’ve found particularly useful whether conceptualizing a new A.I. product or building new features into an existing one:
Design Failure Away: By understanding the two fundamental types of A.I. failure (see the sketch after this list), we can evaluate their impact on the product and use levers such as product definition and UX to design those failures away.
Create and Harness Asymmetric Accuracy: Learn what differentiates a simple A.I. from one that takes significant resources to develop. Understanding this helps us create products that don’t require complex and expensive A.I.
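As a tiny preview of step one, here is a sketch of what I mean by failure types, using the familiar false positive / false negative split from detection systems. The scenarios in the comments are illustrative, not the full framework:

```python
# Toy face-recognition outcomes: (system_flagged, actually_in_database)
outcomes = [
    (True,  False),  # false positive: innocent passerby flagged (London)
    (False, True),   # false negative: a real match walks by unnoticed
    (True,  True),   # true positive: correct match
    (False, False),  # true negative: correctly ignored
]

fp = sum(flagged and not actual for flagged, actual in outcomes)
fn = sum(actual and not flagged for flagged, actual in outcomes)
print(f"false positives: {fp}, false negatives: {fn}")  # -> 1 each
```

The two errors rarely cost the same: in London, a false positive meant an innocent person accosted, while a false negative merely meant a missed match. Weighing those costs is where the framework begins.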
This framework can help make products simpler and more elegant, which in turn reduces development cost, execution risk, and time to market. Make sure to subscribe for Part 2.
To be continued…