When Google engineer Blake Lemoine’s claims that the company’s A.I. had grown sentient hit the news, there was the expected hand-wringing over A.I. bots and their rights, a backlash from the A.I. community explaining how A.I. could not be sentient, and, of course, the philosophizing about what it means to be sentient. No one got to the critical point: that non-sentient mathematical formulas now carry as much weight as humans, if not more, when it comes to decision-making.
Putting aside the topic of A.I. sentience, there’s something more fundamental to consider: What does it mean to hand so much decision-making authority to something that is, by design, usually intangible, unaccountable, inexplicable, and uninterpretable? A.I. sentience is not coming soon–but that doesn’t mean we should treat A.I. as infallible, especially as it starts to dominate decision-making at major businesses.
Today, some A.I. systems already have tremendous power over major outcomes for people, such as credit-scoring models that can determine where people raise families, or healthcare settings where A.I. presides over life-and-death decisions, like predicting sepsis. These aren’t convenient suggestions, like a Netflix recommendation, or even processes that simply speed up operations, like faster data management. These A.I. applications directly affect lives, and most of us have no visibility or recourse when the A.I. makes a decision that’s unintentionally inaccurate, unfair, or even damaging.
This problem has sparked calls for a “human in the loop” approach to A.I.–which means that humans should be more closely involved in developing and testing models that could discriminate unfairly.
But what if we didn’t think about human interaction with A.I. systems in such a one-dimensional way? Thomas Malone, a professor at MIT’s Sloan School of Management, argues for a new approach to working with A.I. and technology in his 2018 book Superminds, which contends that collective intelligence comes from a “supermind” that should include both humans and A.I. systems. Malone calls this a move from human in the loop to “computer in the group,” whereby A.I. is part of a larger decision-making body and–critically–is not the only decision-maker at the table.
This concept reminds me of a colleague’s story from his days selling analytic insights. His client explained that when leadership sat down to make a decision, they would take a printed stack of A.I.-generated analytics and insights and pile them up at one seat in the conference room. These insights counted for one voice, one vote, in a larger group of humans, and never had the final say. The rest of the group knew the insights brought a specific intelligence to the table, but also that they would never be the sole deciding factor.
So how did A.I. seize the mantle of unilateral decision-maker? And why hasn’t “A.I. in the group” become the de facto practice? Many of us assume that A.I., and the math that underpins it, is objectively true. The reasons for this are diverse: our societal reverence for technology, the market’s move toward data-driven insights, the impetus to move faster and more efficiently, and, most importantly, the assumption that humans are often wrong and computers usually are not.
However, it’s not hard to find real examples of how data, and the models it feeds, are flawed: numbers are a direct representation of the biased world we live in. For too long, we’ve treated A.I. as if it somehow lived above these flaws.
A.I. should face the same scrutiny we give our colleagues. Consider it a flawed being that’s the product of other flawed beings, fully capable of making mistakes. By treating A.I. as if it were sentient, we can approach it with a level of critical inspection that minimizes unintended consequences and sets higher standards for equitable and powerful results.
In other words: if a doctor denied you critical care or a broker denied your loan, wouldn’t you want an explanation and a way to change the outcome? To make A.I. essential, we must assume its algorithms are just as error-prone as the humans who built them.
A.I. is already reshaping our world. We must prepare for its rapid spread on the road to sentience by closely monitoring its impact, asking tough questions, and treating A.I. as a partner—not the final decision-maker—in any conversation.
Triveni Gandhi is responsible A.I. lead at Dataiku.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not reflect the opinions and beliefs of Fortune.