Artificial intelligence empowers enterprise leaders with the digital capabilities needed to transform their businesses. Across industries, we see A.I. driving process improvements, accelerating new product development, and enhancing customer experiences.
And its use is everywhere. From robo-advisors providing investment recommendations to predictive maintenance improving machine utilization to recommendation engines facilitating commerce, A.I. is disrupting how we live and work.
Yet, for most enterprises, true innovation comes not from experimentation but from industrialization at scale. In this article, I highlight three best practices for making the most of emerging A.I. capabilities across the enterprise.
Start with the question, not the answer
In the world of A.I., it can often appear as if there are brilliant answers in search of the right question. But instead of starting with the answer, business leaders must start with the problem—reimagining what a solution would look like, and then assessing which digital technologies and A.I. approaches best deliver it. Similarly, A.I. product and platform providers can take a page from history: just as the original personal computer wave succeeded not by selling integrated circuits and graphics processors but by offering the right level of abstraction, A.I. needs the right level of abstraction to show its power and how it can support everyday business needs.
We know the most successful A.I. applications are not the ones with the fanciest technologies, but the ones with a clear and measurable impact on the bottom line. This is important to appreciate because we often associate A.I. with massive, complex projects—fully autonomous driving, for instance. These “big A.I.” projects do have a role to play, but they can be intimidating to some business leaders and inapplicable to many businesses.
Beyond big A.I., though, there are significant opportunities in “small A.I.”: language processing to fully utilize the data in your enterprise, voice processing to recommend the best solutions on service calls, or recommendation engines that sift through large data sets and find patterns we cannot otherwise see to drive critical supply chain decisions. These kinds of A.I. capabilities have become as mainstream as they have become accurate, making them predictable to implement and easy to consume. So instead of searching for “big A.I.” answers, enterprise leaders would do well to look to “small A.I.” toolkits and assemble the best solution for their specific business needs.
Getting the culture right
What sets A.I. apart from the automation wave that preceded it is this: whereas automation digitizes a business process—making it faster, cheaper, and more scalable—A.I. transforms the business process, actually changing the way work gets done. This difference has significant implications: new operating models and redesigned processes become the keys to success.
As a result, while A.I. may deliver the core foundation for transformation, it needs strategic, synchronized, and programmatic execution across people, process, data, and technology to succeed. Change management has to be intentional and planned in order to drive adoption at scale. And as the design of the end-user experience becomes essential, a deep understanding of the end-to-end process, industry context, and business policies must be woven in.
Data, the key enabler of A.I., is now becoming the largest driver of transformational value for organizations. But data is often strewn across multiple entities and business units, without a common model for ownership, usage, and storage, and often lacking central governance around master data, hierarchies, and lineage. In addition, data delivers the most value when it is contextualized for, and closest to, the business it serves, and when its syntax and ontology reflect industry nuance and context.
So the balance between divisional ownership and central governance becomes important in developing a data culture in the enterprise. Success requires broad-based engagement, championship, and literacy—and it comes down to company culture. What has become increasingly obvious is that companies with the right culture end up the most successful with A.I.
Digital ethics is foundational
Companies that build and succeed with industrialized A.I. systems in the long run do not get there by chance—they get there because they build digital ethics and governance into their platforms right from the start. For organizations that fail to do so, it is not just about lost opportunity; they expose themselves to significant reputational, regulatory, and legal consequences.
With the proliferation of personal information—from web tracking to home cameras to health statistics—enterprise leaders must think strategically about how to design ethics into their A.I. programs and underlying data sets. Securing data access, and assigning roles and responsibilities that differ depending on whether you are the owner, transporter, user, or custodian of the data, is a basic requirement. Beyond data, A.I. algorithms themselves must be actively managed to remove unintentional built-in biases that could harm vulnerable communities, and to catch model drift that can unfairly disadvantage segments that aren’t well represented in the data sets.
But A.I. is neither inherently good nor bad; it all comes down to how it is used. In HR, for instance, the use of A.I. has often been criticized—one infamous use case replicated past demographics into new hires, reducing the potential for diversity and inclusion. Equally, in other HR use cases, A.I. programs routinely review and modify job descriptions to eliminate unintended gender bias and expand the candidate pool to broader, more inclusive segments. What we have learned is that intended use needs to be as much a part of A.I. design and engineering as it is a functional and business responsibility.
As A.I. use cases expand, having strong governance in place to proactively monitor digital ethics is becoming key. Just as corporate boards have audit or compensation committees, I believe the best corporations will come to build governance into their core leadership practices and add ethics subcommittees to their boards.
Sanjay Srivastava is chief digital strategist at Genpact. Genpact is a partner of Fortune’s Brainstorm Tech.