Although many companies talk about artificial intelligence, it’s likely that the majority of their employees aren’t actually using machine-learning technologies in the workplace.
One big reason is that while executives may be excited about A.I., employees may feel threatened, or even insulted, that managers would force them to use tools they fear will one day replace them.
As FedEx senior data scientist Clayton Clouse said during an A.I. conference in San Francisco last week, “We shouldn’t expect that people will jump up and down and be excited when we say, ‘Hey, we’re going to be augmenting your job with A.I.’”
Citing a McKinsey survey about A.I., Clouse noted that while the majority of companies polled by the consulting firm said they were implementing A.I., either in their businesses or through pilot projects, “only 6% reported that their employees were actually using the systems the way they should be used.”
The employees, it turns out, are skeptical about A.I., especially machine-learning tools intended to automate decision-making in some way, Clouse said. If workers don’t trust the A.I. tools to do as good a job as they would, they simply aren’t going to use them, he explained.
To get employees to trust A.I. tools, Clouse said that companies must carefully debut their A.I. projects in multiple stages and communicate to workers just how the products are intended to help. During an A.I. product’s testing phase, or beta test, managers should choose employees who are excited about using the tools, as opposed to randomly selecting a bunch of people who may resent having to attend yet more meetings.
Companies also can’t simply rely on their “data nerds” to help test the A.I. products, Clouse said. They need a handful of “general users” who can “speak the same language” as the rest of their colleagues who lack technical pedigrees. Once the beta testing phase is over, companies should hold small workshops so that workers understand how the tools work—and their limitations.
It should be noted that machine-learning tools often make their predictions with so-called “confidence scores.” These scores indicate how certain the tool is about a given prediction; a machine-learning-powered cybersecurity tool, for instance, uses them to express how likely it believes an anomaly in a corporate network is to be a legitimate threat worth investigating.
Managers need to tell workers about the confidence-score settings of their machine-learning tools so that employees don’t get caught off guard, Clouse explained. For instance, a member of a corporate cybersecurity team may be less likely to get annoyed with a machine-learning-powered security tool that’s set to flag as many anomalies as possible as threats if he or she realizes that management agreed on that setting.
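To make the threshold idea concrete, here is a minimal sketch of confidence-score filtering. Everything in it, the Anomaly class, the detector output, and the threshold values, is hypothetical and for illustration only; the point is simply that a lower threshold flags more anomalies (and creates more alerts for the team to triage), while a higher threshold flags fewer but risks missing real threats.

```python
# Hypothetical sketch: how a confidence-score threshold changes
# which anomalies a tool flags as security threats.
from dataclasses import dataclass

@dataclass
class Anomaly:
    description: str
    confidence: float  # model's confidence (0.0-1.0) that this is a real threat

def flag_threats(anomalies, threshold):
    """Return only the anomalies whose confidence meets the agreed threshold."""
    return [a for a in anomalies if a.confidence >= threshold]

# Made-up detector output for illustration.
anomalies = [
    Anomaly("unusual login location", 0.92),
    Anomaly("spike in outbound traffic", 0.55),
    Anomaly("rare process name", 0.18),
]

# A low threshold flags nearly everything; a high one flags only
# high-confidence threats. The setting is a trade-off management agrees on.
for threshold in (0.1, 0.5, 0.9):
    flagged = [a.description for a in flag_threats(anomalies, threshold)]
    print(f"threshold={threshold}: {flagged}")
```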
Ultimately, the goal for management is to introduce an A.I. tool into the workplace that employees will actually want to use. Companies should not automatically assume that workers will want those A.I. products, and they need to think hard about how they debut them, or else risk a backlash.
“You’re going to end up pissing off a lot of people,” Clouse said. And those upset workers will ensure that “your next A.I. project will have an even more difficult path to adoption.”
Jonathan Vanian
@JonathanVanian
jonathan.vanian@fortune.com
A.I. IN THE NEWS
Human translators would help. ProPublica reported that the U.S. government has recommended Google Translate as a tool that officials should use to help “decide whether refugees should be allowed into the United States.” Besides noting that Google already advises people that its Translate tool is “not intended to replace human translators,” the article details the problems the tool has in understanding the nuances of language, which could lead to a host of potentially bad scenarios. As the article states, “The government may misconstrue harmless comments or miss an actually threatening one,” because of Translate’s limitations.
A.I. hiring hits the United Kingdom. Companies in the U.K. are now using A.I. technologies to analyze people’s facial expressions and voices during job interviews, The Telegraph reports. The article discusses Unilever and its use of technology from HireVue, and weighs the pros and cons of using A.I. in hiring, such as potential bias concerns.
Beating the language benchmarks. Google AI and Toyota Technological Institute at Chicago researchers created an A.I. system called ALBERT that has achieved “state-of-the-art results” on several popular benchmarks for natural language processing, the branch of A.I. concerned with understanding human language, VentureBeat reported. The article explains that ALBERT improves on Google’s previous BERT system, which other researchers have used to build their own A.I. models that have performed well on language benchmarks.
Alibaba’s new chip. Alibaba has developed its own computer chip designed to handle A.I. inference tasks, which is when A.I. systems make predictions based on the data they ingest, the South China Morning Post reported. The article said that the new chip is “currently being used within Alibaba to power product search, automatic translation, and personalised recommendations on the company’s websites.”
BEWARE OF BENCHMARKS
Facebook CTO Mike Schroepfer talked to Fortune about the problems with benchmarks and the false impression they may create regarding A.I.’s overall abilities. “The mistake we can make is constructing a very specific benchmark and then creating something that is better than everything else, or even people sometimes, at that benchmark,” Schroepfer said. “You’ll see these big headlines, ‘A.I. is better than people at language,’ and they basically assume all tasks, when really all it was was better at a very, very specific thing.”
EYE ON A.I. TALENT
Yum! Brands, the fast-food giant that operates KFC, Pizza Hut, and Taco Bell, hired Clay Johnson to be its chief digital and technology officer. Johnson was previously the chief information officer and global business services head at Walmart.
Dynam.AI picked Dr. Michael Zeller to be the enterprise startup’s CEO. Zeller was previously the senior vice president of A.I. strategy and innovation at Software AG.
EYE ON A.I. RESEARCH
Deep learning on the highways. Researchers from the University of Siena in Italy published a paper about using deep learning to create a video-surveillance system that analyzes traffic on highways. The researchers wrote that their “results have shown that these networks can efficiently learn the temporal information from the video stream, simplifying the feature engineering process and making very promising predictions.”
Cancer research gets a supercomputing boost. Researchers from Oak Ridge National Lab and Stony Brook University published a paper about using high-performance computing (HPC), or the use of supercomputers, to boost the efficiency of neural networks used to aid cancer research. The team wrote, “The ability to use HPC to produce networks that are capable of fast and accurate predictions makes HPC a significant enabling technology in using deep learning for scientific analysis.”
FORTUNE ON A.I.
Learning to Love the Bot: Managers Need to Understand A.I. Logic Before Using It as a Business Tool – By Jeremy Kahn
A.I. Security Cameras Are the Latest High-Tech Attempt to Combat Mass Shooters – By Bernhard Warner
Amazon Unveils Echo Studio Speaker, Noise-Cancelling Earbuds, and Smart Glasses – By JP Mangalindan
BRAIN FOOD
Musical minds. Rolling Stone published a short profile about musician Holly Herndon, who used neural networks to help create music for her most recent album, PROTO. Herndon, who has a PhD in composition from Stanford University, trained a neural network with human vocals so that the software—dubbed Spawn—learned to mimic the voices. She then wove the A.I.’s vocals into her songs, creating an “uncanny beauty” like “an alien child moaning and wailing in harmony with its mother,” the article said. Herndon, the story notes, has also made “explicit the human labor that went into Spawn,” in order to “help set the norms for A.I. music and prevent uncredited mooching off others’ music.” Herndon said: “I really wanted to make audible the people who went into the training data.”