The requirements for seizing generative AI advantages

When OpenAI launched ChatGPT in November 2022, generative artificial intelligence went from esoteric curiosity to mainstream, provocative technology almost overnight. Since then, the urgency to apply it across nearly every industry has only intensified. Applications as varied as rapid medical discoveries (AI recently helped identify a new superbug-killing antibiotic), faster software development, and business-process optimization that accelerates data-driven decision-making have all been part of the frenzied rush to ride this technology wave.

While AI has quickly become a topic of conversation in the business world, and business leaders are optimistic about its ability to augment the workforce and drive productivity, only 44% of organizations are rolling out or scaling up adoption, according to Workday’s C-Suite Global AI Indicator Report. Nearly half (49%) of CEOs say their organization is unprepared to adopt AI and machine learning (ML), citing a lack of tools, skills, and knowledge.

But for many companies, that won’t be the case for long. More than two-thirds of organizations plan to increase their AI investments in the next three years, according to McKinsey. The potential benefits, and the will to realize them, are huge. Now organizations must form a strategy to safely capitalize on the technology.

Fully leveraging the capabilities of generative AI, and mitigating its risks, requires three essential things: quality data, responsible implementation, and a strategic partnership between the C-suite and IT.

Data quality: Generative AI’s foundation

The large language models (LLMs) behind ChatGPT and similar applications are typically trained on broad swaths of language data scraped from the web. They have proven very effective at producing long-form, well-structured natural language. However, they have also been shown to frequently produce undesirable outputs, such as factually incorrect statements (known as “hallucinations”) or toxic and biased content. That is not surprising, given that such material is present in the data they were trained on.

As a result, data integrity is a major concern for the C-suite. About two-thirds (67%) of CEOs view “potential errors” as a top risk of AI and ML integration, and only 4% of all respondents said their data is completely accessible, according to Workday’s AI Indicator report.

That perception is on point. Many organizations have yet to build a strong data foundation of high-quality, clean data. Instead, they’re struggling with siloed, inaccessible data, or with data that’s not uniformly structured or even fully digitized.

Executives who want to create and implement generative AI tools need to put the necessary building blocks in place first: high-quality, reliable, and easily accessible data. Without that, investment in AI is unlikely to produce sustained value. 
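
As a rough illustration of what those building blocks can look like in practice, the Python sketch below runs a few automated data-quality checks before a data set is handed to any AI pipeline. The file name, column names, and thresholds are hypothetical assumptions for illustration; a real data foundation would also cover lineage, freshness, and access controls.

```python
import pandas as pd

# Hypothetical HR extract; the file and column names are illustrative only.
df = pd.read_csv("employee_records.csv")

def data_quality_report(df: pd.DataFrame) -> dict:
    """Compute simple quality signals before feeding data to an AI pipeline."""
    return {
        # Completeness: share of non-null values per column.
        "completeness": df.notna().mean().to_dict(),
        # Uniqueness: duplicated rows often signal upstream integration issues.
        "duplicate_rows": int(df.duplicated().sum()),
        # Consistency: many distinct spellings in a categorical column
        # suggest data that isn't uniformly structured.
        "distinct_departments": int(df["department"].nunique()),
    }

report = data_quality_report(df)

# Gate AI workloads on minimum quality thresholds (values are arbitrary).
if min(report["completeness"].values()) < 0.95 or report["duplicate_rows"] > 0:
    raise ValueError(f"Data foundation not ready for AI workloads: {report}")
```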

The good news is that most of the recent, noteworthy generative AI problems—incorrect outputs or IP infringements—were a result of using those broad, web-scraped data sets. In a business context, data sets are usually higher-quality. They’re smaller, more focused, and proprietary, all of which help mitigate some of the risks.
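
One common pattern for turning that advantage into safer outputs is to ground the model’s answers in a company’s own documents rather than in whatever it absorbed from the web, a simplified form of retrieval-augmented generation. The sketch below is a deliberately naive, library-free version; the documents are hypothetical, and a production system would use vector search and a real LLM API instead of printing the assembled prompt.

```python
# Minimal, library-free sketch of grounding answers in proprietary documents.
PROPRIETARY_DOCS = {
    "refund-policy": "Refunds are issued within 14 days of purchase...",
    "sla": "Enterprise support responds to P1 incidents within 1 hour...",
}

def retrieve(query: str, docs: dict[str, str], top_k: int = 1) -> list[str]:
    """Naive keyword-overlap scoring; real systems use vector search."""
    q_terms = set(query.lower().split())
    return sorted(
        docs.values(),
        key=lambda text: len(q_terms & set(text.lower().split())),
        reverse=True,
    )[:top_k]

def grounded_prompt(query: str) -> str:
    context = "\n".join(retrieve(query, PROPRIETARY_DOCS))
    # Instructing the model to answer only from supplied context is one
    # common mitigation for hallucinated or off-policy answers.
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        f"context, say you don't know.\n\nContext:\n{context}\n\nQuestion: {query}"
    )

print(grounded_prompt("How fast is P1 incident response?"))
```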

Responsible implementation: A human-centric approach

Applying generative AI responsibly means grounding implementation in respect for privacy, security, and human judgment. The tremendous publicity around recent advancements, along with some of the publicly visible issues LLMs have demonstrated, has made many leaders aware of the technology’s potential risks, and they’re being proactive in addressing them.

Privacy, security, and accuracy, the top concerns flagged by CEOs in Workday’s report, should stay front and center. Still, just 21% of companies report having a responsible AI governance program for how employees can use generative AI at work, McKinsey found, showing there is plenty of room for improvement. Generative AI systems can be built out safely and responsibly, with secure, transparent data sets that protect against bias and deliver tangible benefits.
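
As a concrete example of one such guardrail, the sketch below redacts obvious personally identifiable information from prompts before they are sent to an external generative AI service. The regex patterns are illustrative assumptions, not an exhaustive PII detector; enterprise deployments typically layer in dedicated classification tooling.

```python
import re

# Illustrative PII patterns; real guardrails need far broader coverage.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace detected PII with labeled placeholders before the prompt leaves the company."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))  # -> "Summarize the complaint from [EMAIL], SSN [SSN]."
```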

When used properly, AI can boost employee retention, improve auditing, and enable advanced workforce skill mapping, all without displacing people. An important step in building trust and securing buy-in among employees and customers is developing a responsible governance program that articulates AI ethics principles and puts people at the center.

The IT department: Your strategic partner

The influence of AI on the world of work, similar to the transformative impact of the internet, requires a comprehensive company-wide approach, with IT positioned as a partner to drive and maximize its benefits. The C-suite’s strategic vision and IT’s technical expertise can be combined to drive innovation and gain a competitive advantage. 

As AI and ML applications multiply and become central to running a globally competitive business, the strength of this partnership is crucial. It also bolsters risk management by identifying and mitigating potential pitfalls in AI implementation. With a strong partnership in place, organizations can establish the necessary guardrails to ensure responsible AI practices while maximizing the tangible business benefits of generative AI technologies.

With a growing number of off-the-shelf enterprise generative AI tools now available, sitting on the AI sidelines is no longer an option. Remaining competitive means fully leveraging AI’s transformative potential. This is not just a choice; it is the defining step toward securing a competitive edge and ensuring future relevance in an increasingly digital world.

Shane Luke is vice president, head of AI and machine learning at Workday. Workday is a partner of Fortune’s Brainstorm A.I.