In August 2022, Daniel Shorr was afraid. Inside his room at a gray Stanford dormitory, the 23-year-old was scrolling through Twitter when he saw someone post a painterly illustration of a woman walking past a neon-lit storefront. It was “beautiful,” he told Fortune.
The art, however, wasn’t the work of a human. Its author—inspired by the prompt “Los Angeles at night”—was Stable Diffusion, an artificial intelligence model that generates images from text. The algorithm’s artistic prowess gave Shorr a “WTF moment.”
“Intelligent algorithms will become a bigger and bigger slice of our world’s compute diet,” he wrote shortly after in a note on his MacBook. “Cryptography is the only mechanism for keeping these powerful algorithms accountable.”
Along with his freshman-year roommate, Ryan Cao, Shorr decided to put his Post-it note musings into action. Upon graduating from their master’s programs at Stanford, he and Cao founded Modulus Labs, a startup focused on machine-learning accountability. And to keep A.I. algorithms in check, they plan to use a technique that’s recently taken crypto by storm: zero-knowledge proofs. So far, they’ve raised $1.3 million and are organizing another funding round for approximately $5 million.
“We know, we know—sure, ‘wHaT If wE pUt thE AI On tHe BlOcKchaIN?’ sounds like something a seven year-old would dream up,” they recently wrote in a Medium post. “But then again, in our experience, seven year-olds can be surprisingly wise.”
The recent Stanford graduates are not the only ones who believe zero-knowledge proofs are a check on A.I. They are part of a community of researchers and blockchain entrepreneurs who believe in the fusion of this buzzy cryptographic technique with the field of machine learning. While the technological combination is immediately relevant to the world of crypto, they say, it may soon find applications beyond the blockchain.
“Especially as we start to realize, in very ugly ways, how easy it is to tamper with these machine intelligences,” Shorr told Fortune, “we are going to rely on the hard bronze of mathematics.”
Why verify A.I.?
In a matter of months, ChatGPT, the A.I. language model developed by OpenAI, has become an inextricable part of the national zeitgeist. A future where A.I. algorithms, say, diagnose cancer or trade billions on the stock market seems a lot less like science fiction.
“If you look at the rise of A.I. technology, the growth has been astonishing,” Daniel Kang, an incoming assistant professor of computer science at the University of Illinois Urbana-Champaign, told Fortune. “And I certainly do think it will be the case that within five years, we will want transparency into a lot of these algorithms.”
Kang—like Shorr and Cao at Modulus Labs—believes that as A.I. models become more powerful, they need more accountability. As an example, he said, when patients are currently admitted to hospitals and their care goes awry, they can request hospital records to check doctors’ decision-making. “But if you do imagine a world where some diagnostics, or even some decisions, are made by, say, a medical model,” he said, “you probably do want to have that same level of accountability.”
Or picture a future where A.I. algorithms read resumes from job applicants, decide who gets a parking ticket, or rule on whether someone goes to prison, says Jason Morton, an associate professor of mathematics at Penn State. As the stakes rise, so do the incentives for people to tamper with, influence, or even replace A.I. algorithms with ones that better serve their interests. Laypeople will need to have confidence in their medical diagnoses, and believe that parking tickets or—in the most dystopian sense—prison sentences have been justly given out.
This is partly why Morton, who is on leave to build his own startup, and Kang have dedicated their time and research to combining zero-knowledge proofs with machine learning. “It’s a way to prove to everyone that the model that they think is running is the one that’s running,” Morton told Fortune.
‘Proprietary secret sauce’
Just as A.I. has attracted untold buzz, zero-knowledge proofs have generated their fair share of chatter in the world of crypto. “People inside this space are extremely excited about it,” Morton said.
First articulated by researchers in 1985, the cryptographic technique has two main advantages: privacy and “succinctness,” or the ability to prove something true without the need to parse each and every statement. An auditor can, for example, quickly verify someone correctly submitted a tax return without seeing data from the return.
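To make the "prove without revealing" idea concrete, here is a toy sketch in Python of one of the classic constructions, a Schnorr-style identification protocol: the prover convinces a verifier that it knows a secret exponent without ever disclosing it. The tiny numbers and the single interactive round are illustrative choices for readability, not anything a production system (or the succinct proofs used in zkML) would actually use.

```python
# A toy Schnorr-style zero-knowledge proof of knowledge (illustrative only):
# the prover convinces the verifier it knows a secret x with y = g^x mod p,
# without ever revealing x. Parameters are tiny for readability, not security.
import secrets

p = 23                      # small prime, p = 2q + 1
q = 11                      # prime order of the subgroup generated by g
g = 4                       # generator of that order-q subgroup

secret_x = 7                # the prover's secret
public_y = pow(g, secret_x, p)    # published value: y = g^x mod p

# --- one round of the interactive protocol ---
r = secrets.randbelow(q)          # prover: random nonce
commitment = pow(g, r, p)         # prover -> verifier: t = g^r

challenge = secrets.randbelow(q)  # verifier -> prover: random challenge c

response = (r + challenge * secret_x) % q   # prover -> verifier: s = r + c*x mod q

# verifier checks g^s == t * y^c (mod p); it learns nothing about x itself
lhs = pow(g, response, p)
rhs = (commitment * pow(public_y, challenge, p)) % p
print("proof accepted:", lhs == rhs)
```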
Cryptocurrencies like Zcash and Monero have used zero-knowledge proofs to cloak users’ transactions. More recently, programmers have used succinct proofs—of which zero-knowledge proofs are a subset—for scalability.
Blockchains like Ethereum are slow, decentralized computers. The larger a program, the longer it takes for a blockchain to run it. To get around this, developers run programs off-chain, on private servers, and generate a succinct proof of the execution. Then they post the proof to the blockchain, where verifying the proof is enough to confirm the code was executed correctly, with no need to re-run each and every line of the program.
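As a rough sketch of that workflow, the snippet below mocks the two halves: an expensive program runs off-chain, and the chain only calls a cheap verifier. The snark_prove and snark_verify functions are hypothetical stand-ins for a real proving system; they exist only to show where the heavy work happens and what the chain actually checks.

```python
# A minimal sketch of the off-chain/on-chain split described above.
# `snark_prove` and `snark_verify` are hypothetical placeholders, mocked so the
# shape of the workflow is visible: heavy computation happens off-chain, and
# the chain only runs the cheap `verify` step.
from dataclasses import dataclass

@dataclass
class Proof:
    claimed_output: int
    blob: bytes          # in a real system: a small argument, cheap to check

def heavy_offchain_program(x: int) -> int:
    # stand-in for an expensive program (e.g. running a large model)
    return sum(i * i for i in range(x))

def snark_prove(x: int, y: int) -> Proof:
    # placeholder: a real prover would output a succinct argument that
    # heavy_offchain_program(x) == y, without the verifier re-running it
    return Proof(claimed_output=y, blob=b"...")

def snark_verify(x: int, proof: Proof) -> bool:
    # placeholder: a real verifier checks the proof in milliseconds
    return isinstance(proof.blob, bytes)

# off-chain: run the program and generate the proof
x = 10_000
y = heavy_offchain_program(x)
proof = snark_prove(x, y)

# on-chain: the contract only calls the verifier, never the program itself
print("on-chain verifier accepted output:", snark_verify(x, proof))
```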
Now, startups are pushing to use zero-knowledge proofs for machine learning, a fusion they call zkML.
Through zero-knowledge proofs, they say, outside observers can verify that companies or developers used a promised A.I. algorithm. For example, OpenAI, the juggernaut that developed ChatGPT, can prove that its chatbot wrote a poem without revealing the algorithm’s “weights,” or what an A.I. model learns after training on copious amounts of data.
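One way to picture this, sketched below, is the commitment half of that setup: the model owner publishes only a hash of the weights, and a zkML proof (not implemented here) would then tie a given output to the weights behind that hash. The weights, prompt, and helper names are made up for illustration.

```python
# A sketch of the "hidden weights, public commitment" pattern. The hashing is
# real; the zero-knowledge proof that would bind an output to the committed
# weights is deliberately left out and only described in comments.
import hashlib
import json

weights = {"layer1": [0.12, -0.97, 0.33], "layer2": [1.41, 0.08]}   # kept private

# public commitment: anyone can later check a proven claim against it
commitment = hashlib.sha256(
    json.dumps(weights, sort_keys=True).encode()
).hexdigest()
print("published weight commitment:", commitment[:16], "...")

def model_infer(weights, prompt: str) -> str:
    # stand-in for running the private model
    return f"a short poem about {prompt}"

output = model_infer(weights, "Los Angeles at night")

# In a real zkML pipeline, the owner would attach a proof that `output` was
# computed by the weights matching `commitment`; the verifier checks the proof
# against the commitment and never sees the weights themselves.
```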
“Twitter open-sourced their recommendation algorithm, but the weights remain hidden! How can we trust it? We’ll show how to verify the Twitter algorithm with zkml!” Daniel Kang (@daniel_d_kang) wrote on April 17, 2023, linking to a blog post (https://t.co/Sjhkl7fjjK) and a GitHub repository (https://t.co/DwtT9uidFB).
Kang, the Illinois academic, recently demonstrated the above when he created a zero-knowledge proof to verify Twitter’s A.I.-powered recommendation algorithm, which ranks the tweets that appear in a user’s timeline. While Elon Musk released the code behind the ranking algorithm, the embattled CEO of the social media site and Tesla didn’t release the weights of the A.I. model that powers it. “Twitter has a lot of really good reasons not to release the weights,” Kang told Fortune. “There’s first a proprietary secret sauce, but it also contains a lot of private data.”
But with a zero-knowledge proof, Kang can prove that Twitter’s ranking algorithm executed without manipulation. “Even today, companies can—and do—lie, and auditors come under fire for this,” he said. “But with the case of zero-knowledge proofs, the auditors can actually tell with certainty that Twitter, for example, did the right thing.”
Immediate applications
Kang’s job as a researcher, he said, is to “build technology that will be impactful five to 10 years from now.” But there are entrepreneurs who believe they can build zkML-based businesses now.
One of the more immediate applications is scalability. It’s too expensive to run A.I. models on decentralized computing platforms like Ethereum, so developers have to run them off-chain, which runs counter to crypto’s ethos of transparency. To convince users they’ve used the correct A.I. model, developers can prove that they ran the right algorithm with zero-knowledge proofs.
Modulus Labs, for example, is working with A.I. Arena, a video game where A.I. fighters learn from human players. Humans pit the fighters against each other, with the victor winning crypto. Because of the financial stakes, players need to trust that their opponents haven’t unfairly manipulated or influenced the fighters. To give users this confidence, Modulus Labs is developing custom zero-knowledge proofs to verify that the A.I. fighter trained by a player is the same one deployed in a bout.
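As a rough sketch of the shape of that guarantee (not Modulus Labs’ actual construction), a player could commit to a fighter’s weights at training time, and each match could be accompanied by a proof that the bout’s moves came from the committed model. In the snippet below the commitment is real hashing, while the proof step is a mocked placeholder with invented names.

```python
# A conceptual sketch of "the fighter you trained is the fighter that fought."
# `prove_moves_from_model` and `verify_match_proof` are hypothetical
# placeholders for the zero-knowledge machinery.
import hashlib
import pickle

def commit(model_weights) -> str:
    return hashlib.sha256(pickle.dumps(model_weights)).hexdigest()

# at training time: the player registers a commitment to the trained fighter
trained_fighter = [0.4, -1.2, 0.7]            # toy stand-in for model weights
registered_commitment = commit(trained_fighter)

# at match time: the game produces a proof that the bout's moves were made by
# the model behind the registered commitment (mocked here)
def prove_moves_from_model(weights, moves) -> dict:
    return {"commitment": commit(weights), "moves": moves, "proof": b"..."}

def verify_match_proof(proof: dict, expected_commitment: str) -> bool:
    # a real verifier would check the zk proof; here we only check the binding
    return proof["commitment"] == expected_commitment

match_proof = prove_moves_from_model(trained_fighter, ["jab", "block", "uppercut"])
print("fighter verified:", verify_match_proof(match_proof, registered_commitment))
```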
Venture capitalists are taking notice. “We’re definitely interested in investing in this category, because we think it’s going to be very important,” Ali Yahya, a general partner at the venture capital firm a16z crypto, told Fortune. (He specified, however, that as an investor, he was focused on zkML’s application in crypto, not beyond.)
Worldcoin, an initiative founded by OpenAI’s Sam Altman, is also an influential player in the nascent space. The project, which aims to convince billions to scan their irises to prove they’re not robots, uses zero-knowledge proofs with machine learning as part of its iris-scanning technology. One of its developers has organized a Telegram group dedicated to the field, and its associated foundation intends to issue grants to zkML projects and researchers. “This is a huge research area for us, one that’s very promising,” Steven Smith, head of protocol at Tools for Humanity, the company developing Worldcoin, told Fortune.
Smaller firms are sniffing around zkML too. “Some point in the next—whether it’s today, six months, four months—is probably the time for a deep tech investor to take an early position,” Tom Walton-Pocock, founder of deep tech VC firm Geometry, told Fortune.
Whether zero-knowledge proofs become a daily part of our A.I.-verification diet depends on a score of logistical hurdles, including the scale of computing power needed to generate proofs for complex A.I. algorithms and the mechanics of how the proofs trickle down to a layperson, who may be browsing Twitter or Bluesky, or whichever social media platform first implements the technology. Moreover, Kang says the proofs aren’t a panacea, and that A.I. accountability requires a “multi-prong[ed] approach.”
But Shorr and Cao, who work out of a Silicon Valley WeWork with whiteboards covered with equations, Post-it notes, and diagrams, are taking a calculated risk.
“We’re big believers,” Shorr told Fortune, “that ultimately this tech will be beyond just crypto itself.”