Amid the torrent of announcements at its GTC developer conference Tuesday, Nvidia (NVDA) added more capabilities to its collaborative engineering and modeling metaverse platform. That includes the ability to build “digital twins” of cars, robots and other real-world projects to help speed development.
The metaverse is an emerging, uber-hot concept that brings together capabilities like augmented reality, virtual reality, collaboration tools and the cloud to blend the real and virtual worlds, with myriad, potentially ground-shaking benefits for work and play. With the technology and some purpose-built equipment, for example, a prospective car buyer could see and test-drive a full showroom worth of models at a single station.
Read: Nvidia’s stock is at a stratospheric high — these five catalysts could propel it higher
In recent months, many high-tech juggernauts have been showcasing metaverse plans and prowess at their own events. But at Big Tech shows, where executives often blend product announcements with pie-in-the-sky pronouncements, it can sometimes be difficult to separate the road-ready vehicles from the concept cars. Late last month, for example, Facebook — now Meta Platforms (FB) — presented a wide-sweeping, all-metaverse-all-the-time vision that was concept-rich but virtually detail-free. Despite that, it may have helped drive the latest investment frenzy in companies with early stakes in the metaverse.
Contrast that with Microsoft (MSFT), which last week laid out a far more focused and realistic metaverse vision that centered on Microsoft Teams, the company’s business-collaboration platform. The first components are due out early next year. One of my favorites: a virtual avatar for videoconferences, complete with facial expressions. How great to have a third option for attending early-morning online working sessions, a welcome addition to today’s unpalatable options: appear bleary-eyed and unshaven on camera, or risk putting off clients and coworkers by disabling video.
Nvidia’s metaverse play
Nvidia’s platform — called Omniverse — is an unusual hybrid in that it both enables metaverse-style collaboration and provides integrated tools to help others further their own metaverse development. And it’s arguably the most focused and fleshed-out offering in this fledgling arena.
Omniverse was announced nearly three years ago, and has been available to engineers as an open beta since last fall. All along, Nvidia has been adding new features and capabilities that companies like BMW, DeepMind and Siemens are already putting to work to automate factories, maintain equipment — even decode human proteins to help develop new drugs.
At the virtual event Tuesday, Nvidia unveiled two new core capabilities. The first, Nvidia Omniverse Avatar, helps deliver expressive, multilingual helpers like concierges, desk clerks and order takers. And the second, Nvidia Omniverse Replicator, gives developers the ability to manipulate digital twins in virtual worlds, a potentially powerful tool for streamlining AI development.
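Replicator’s pitch is that a digital twin can generate perfectly labeled training data: because the software stages each scene, it knows the ground truth for every rendered frame. Here is a minimal conceptual sketch of that domain-randomization idea; it is not Omniverse’s actual API, and every name in it is hypothetical.

```python
# Conceptual sketch of domain randomization for synthetic training data.
# NOTE: hypothetical names throughout -- this is NOT the Omniverse Replicator API.
import random
from dataclasses import dataclass

@dataclass
class SceneParams:
    """Parameters of a digital-twin scene we can randomize."""
    light_intensity: float   # arbitrary units
    object_angle: float      # degrees
    camera_distance: float   # meters

def randomize() -> SceneParams:
    """Draw a random scene configuration (domain randomization)."""
    return SceneParams(
        light_intensity=random.uniform(0.2, 1.0),
        object_angle=random.uniform(0.0, 360.0),
        camera_distance=random.uniform(1.0, 5.0),
    )

def render_scene(params: SceneParams) -> bytes:
    """Stand-in for a renderer; a real pipeline would return an image."""
    return repr(params).encode()

def generate_dataset(n: int):
    """Yield (image, label) pairs; labels are free because we set the scene."""
    for _ in range(n):
        params = randomize()
        image = render_scene(params)
        label = {"angle": params.object_angle}  # ground truth known exactly
        yield image, label

for image, label in generate_dataset(3):
    print(label)
```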
Why Nvidia?
At first blush, it might seem counterintuitive for a chipmaker to lead the metaverse charge. Certainly, semiconductors — in particular, 3D graphics accelerators, CPUs and communications processors — will play an outsized role in enabling the metaverse. But the benefits to Nvidia — itself a leading supplier in two of those markets, with aspirations in the third — extend far beyond just chip sales.
Nvidia CEO Jensen Huang opened his GTC keynote with a nod to CUDA, the proprietary platform that helped his company build a high-performance computing and AI empire, as well as insulate it from competition.
“The number of developers that use Nvidia has grown to nearly three million,” Huang said. “CUDA has been downloaded 30 million times over the past 15 years, and seven million last year alone.”
True enough. But lately, open-source programming models like SYCL and higher-level frameworks like TensorFlow are breaking down CUDA’s lock on design at a time when new entrants — from established players like Intel (INTC) and Xilinx (XLNX), which AMD (AMD) is buying, to startups like Graphcore and Cerebras Systems — are coming to market.
Nvidia isn’t on the ropes by any stretch. It is a tech darling of Wall Street that has been powering the AI revolution, with revenues growing at startup speed — and earnings blossoming even faster. For the quarter ended in July, the company had revenues of $6.51 billion, up 68% from the same quarter last year, and net income of $2.37 billion, up 282%. The company is scheduled to announce its fiscal third quarter results next week.
By the same token, it will become progressively harder to keep developers in the CUDA fold, which works only with Nvidia GPUs, when open-source tools let them choose among Nvidia and its rivals on a case-by-case basis. That means Nvidia may have to work harder to keep AI design wins.
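A minimal sketch shows why those higher-level frameworks loosen the lock: the TensorFlow snippet below defines and trains a tiny model without a single vendor-specific call, so it runs unchanged on an Nvidia GPU, a rival accelerator with a TensorFlow backend, or a plain CPU.

```python
# A minimal sketch of framework-level portability: the model code below
# never mentions CUDA, and TensorFlow dispatches it to whatever
# accelerator backend is installed (an Nvidia GPU, or CPU fallback).
import tensorflow as tf

print("Accelerators found:", tf.config.list_physical_devices("GPU"))

# Define and train a tiny model; no vendor-specific code anywhere.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((32, 8))
y = tf.random.normal((32, 1))
model.fit(x, y, epochs=1, verbose=0)
print("Ran on whichever device TensorFlow selected.")
```

The hardware choice collapses to whichever backend is installed, and that is precisely the dynamic that erodes proprietary lock-in.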
Nvidia is no stranger to competing on those terms. In the PC graphics arena, where design wins are more fleeting because development is dominated by open-source OpenGL and Windows DirectX, the company battles fiercely with AMD. And typically, the one with the best hardware at the time reaps the benefits.
Enter … Omniverse
With that as a backdrop, Omniverse could emerge as a linchpin in Nvidia’s efforts to keep developers from straying. It is, of course, a modern, next-generation platform for development and collaboration. More than that, though, it fights the open-source fire it’s facing in AI development with an open-source twist of its own.
That’s made possible by Universal Scene Description, an open-source 3D scene-description framework developed by Pixar that lets developers mix and match models from different platforms, like Adobe (ADBE) and Autodesk (ADSK). That will do a world of good for developers who want to share and build on each other’s efforts but don’t work with the same tools.
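For a sense of how that mixing and matching works, here is a minimal sketch using Pixar’s open-source USD Python API (the pxr module, installable as the usd-core package); the referenced asset files are hypothetical stand-ins for exports from different design tools.

```python
# A minimal sketch of USD composition with Pixar's open-source Python API
# (pip install usd-core). The asset file names below are hypothetical;
# in practice they would be exports from different design tools.
from pxr import Usd, UsdGeom

# Create a new stage: the shared scene everyone collaborates on.
stage = Usd.Stage.CreateNew("factory_twin.usda")
UsdGeom.Xform.Define(stage, "/Factory")

# Pull in assets authored in different tools by reference, not by copy.
robot = stage.DefinePrim("/Factory/Robot")
robot.GetReferences().AddReference("robot_from_maya.usd")          # hypothetical file

conveyor = stage.DefinePrim("/Factory/Conveyor")
conveyor.GetReferences().AddReference("conveyor_from_3dsmax.usd")  # hypothetical file

stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())
```

The key design point is that assets are composed by reference, so each team keeps working in its own tool while the shared stage stays current.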
“Omniverse can connect design worlds,” Huang said. “[It] will revolutionize how the 40 million 3D designers in the world collaborate.”
And the metaverse?
The metaverse has the potential to be disruptive. But invariably, technology disruptions take far longer to pop than anyone expects. And the metaverse will be no exception. AR and VR are still immature, and collaboration is only now taking off, courtesy of Covid-19. And the cloud does not yet have what it takes to supply the compute, graphics processing and communications that the metaverse will require.
To be sure, big, disruptive ideas that leverage big, disruptive technology don’t take hold until the technology has had a chance to develop, mature and deploy. In the interim, we’ll see lots of half-steps — and countless missteps — along the way.
Remember, no one envisioned the Netflix streaming-video juggernaut during the ’90s internet boom. Netflix may not even have seen it coming. The company started out in 1998 with an online store, but used the site only to rent DVDs. It didn’t start streaming video until 2007.
Companies like Microsoft have already got a bead on half-steps. And investors, hungry for a piece of the next Facebook, Amazon (AMZN), Apple (AAPL) or Alphabet (GOOG), will no doubt pour billions into missteps. And for the moment, at least, Nvidia’s in a good place, stoking early metaverse development — and early investment — with its own metaverse: Omniverse.
Mike Feibus is president and principal analyst of FeibusTech, a market research and consulting firm. Reach him at mikef@feibustech.com. Follow him on Twitter @MikeFeibus. He does not directly own shares of any companies mentioned in this column.