A fascinating perspective from Jeremy Kahn's Eye on A.I. newsletter at Fortune, featuring Ahmer Inam, the chief A.I. officer at Pactera Edge.
Note that the ideas behind Mindful A.I. were pioneered by Orvetta Sampson, the principal creative director at Microsoft, and De Kai, a professor at UC Berkeley.
(Republished from Fortune's blog)
Lately, Inam has been thinking a lot about why A.I. projects so often fail, especially in large organizations. To Inam, the answer to this problem—and to many others surrounding the technology—is something called “Mindful A.I.”
“Being mindful is about being intentional,” Inam says. “Mindful A.I. is about being aware and purposeful about the intention of, and emotions we hope to evoke through, an artificially intelligent experience.”
. . .
Inam is arguing for a first-principles approach to A.I. He says that too often organizations go wrong because they adopt A.I. for all the wrong reasons: because the C-suite wrongly believes it’s some sort of technological silver bullet that will fix a fundamental problem in the business, or because the company is desperate to cut costs, or because they’ve heard competitors are using A.I. and they are afraid of being left behind. None of these are, in and of themselves, good reasons to adopt the technology, Inam says.
Instead, according to Inam, three fundamental pillars should undergird any use of A.I.
- First, it should be "human-centric." That means thinking hard about what human challenge the technology is meant to solve, and just as hard about what its impact will be, both on those who will use it—for instance, the company's employees—and on those who will be affected by the output of any software, such as customers.
- Second, A.I. must be trustworthy. This pillar encompasses ideas like explainability and interpretability—but it goes further, looking at whether all stakeholders in a business are going to believe that the system is arriving at good outputs.
- Third, A.I. must be ethical. This means scrutinizing where the data used to train an A.I. system comes from and what biases exist in that data. But it also means thinking hard about how that technology will be used: even a perfect facial recognition algorithm, for instance, might not be ethical if it is going to be used to reinforce a biased policing strategy. “It means being mindful and aware of our own human histories and biases that are intended or unintended,” Inam says.
A mindful approach to A.I. tends to lead businesses away from adopting off-the-shelf solutions and pre-trained A.I. models that many technology providers offer. With pre-trained A.I. models, it’s simply too difficult to get enough insight into critical elements of such systems—exactly what data was used, where it came from, and what biases or ethical issues it might present. Just as important, it can be difficult for a business to find out exactly where and how that A.I. model might fail.
My favorite example of this is IBM's "Diversity in Faces" dataset. The intention was a good one: Too many public datasets of faces being used to build facial-recognition systems didn't have enough images of Black or Latino individuals. And too often the annotations found in these datasets can reinforce racial and gender stereotypes. In an effort to solve this problem, in January 2019, IBM released an open-source dataset of 1 million human faces that was supposed to be far more diverse, with far less problematic labels.
All sounds good, right? What company wouldn’t want to use this more diverse dataset to train its facial-recognition system? Well, there was just one problem: IBM had created the dataset by scraping images from people’s Flickr accounts without their permission. So users who blindly adopted the new dataset were unwittingly trading one A.I. ethics problem for another.
Another consequence of Inam’s three pillars is that A.I. projects can’t be rushed. Running a human-centric design process and thinking through all the potential issues around trustworthiness and ethics takes time. But the good news, Inam says, is that the resulting system is far more likely to actually meet its goals than one that is sped into production.
To meet all three pillars, Inam says it is essential to involve people with diverse perspectives, not only in terms of race, gender, and personal background, but also in terms of roles within the organization. "It has to be an interdisciplinary group of people," he says.
Too often, the teams building A.I. software sorely lack such diversity. Instead, engineering departments are simply told by management to build an A.I. tool that fulfills some business purpose, with little input during the conceptualization and testing phases from other parts of the company. Without diverse teams, it can be hard to figure out what questions to ask—whether on algorithmic bias or legal and regulatory issues—let alone whether you’ve got good answers.
As Inam was speaking, I was reminded of that old adage, "War is too important to be left to the generals." Well, it turns out, A.I. is too important to be left to the engineers.