Outside the Box: The digital road to AI riches will be built by Nvidia, Broadcom and Marvell

Just as the internet needed new plumbing to function when it was developed in the 1990s, the proliferation of artificial intelligence will require a gaggle of switches, connectors and transceivers that can move a gargantuan amount of information.

Nvidia NVDA, Broadcom AVGO and Marvell Technology MRVL are among the biggest companies building this digital transportation system, which is critical in realizing the potential of AI. Global spending on AI will top $150 billion this year, an increase of 27% from 2022, according to a forecast from International Data Corp. That growth rate will continue through 2026, when spending will exceed $300 billion.

There’s a mother lode of money to be made beyond the GPU gold rush, which has enraptured investors. Ultra-fast chips are the vehicles by which AI can process and churn out language and images, and Nvidia is, so far, easily winning that race.

But the digital road is another story, and a competition is forming that’s likely to present a huge payoff for investors. Nvidia has the most to gain, as it sells products in almost every AI-related market for hardware and software. Oft-overlooked Broadcom, which owes much of its success to serial acquisitions, is now pouring money into research and development. Marvell, whose stock market value is just 5% of Nvidia’s, is the leader in lightning-fast optical digital signal processors (DSPs), which are quickly becoming the industry standard.

One potential obstacle to growth for these and other AI-related companies is that the U.S. is considering tougher restrictions on the export of high-powered chips to China, a prospect that could dent sales in one of the largest markets in the world.

The move comes as the Biden administration is seeking to contain China’s AI ambitions. Nvidia has the most at stake, as it controls 80% of the ultra-high-power semiconductor market.

Still, Chief Financial Officer Colette Kress said in a webinar that there would be no significant immediate financial impact, given strong demand from around the world. Nvidia and other chipmakers weathered the previous controls put in place in October 2022 by designing new chips for the Chinese market, and any near-term hit is likely to be cushioned by the immense AI demand fueling growth across the semiconductor industry.

Read: China wants to tightly control AI at home. AI has other plans.

Here’s a deep dive into the three companies and their prospects as AI is poised to dominate tech trends over the coming decade:

1. Nvidia

The Santa Clara, Calif.-based company shocked the technology and investing markets when it said May 24 that revenue would jump 53% in the three months through July from the previous quarter, or to $11 billion from a little over $7 billion.

The speed at which AI is coursing through the corporate world was catalyzed in November by OpenAI’s ChatGPT, a bot that has near-perfect language ability. Since then, company tech budgets have been realigned. That’s why 2023 will be remembered as the year that artificial intelligence gained liftoff.

Nvidia’s graphics-processing units are the fastest on the market, giving the company a 90% market share in AI-enabled chips. OpenAI, which is in effect controlled by Microsoft Corp. MSFT, has been among its biggest customers. Companies including Advanced Micro Devices Inc. AMD are attempting to catch up.

But Nvidia has spread its bets among other engineering endeavors. Founder and CEO Jensen Huang said in May that $1 trillion in data center infrastructure, built by the likes of Alphabet GOOG, Microsoft and Meta Platforms META, will need to transition to what he called accelerated computing to handle generative AI workloads. That’s where the plumbing comes in.

Compared with Broadcom and Marvell, Nvidia has the broadest array of networking hardware and software in high-end computing systems. The high-bandwidth and low-latency requirements for networking terabytes of data for training AI and machine learning (ML) systems are prompting data center companies to upgrade cables, transceivers and switches to handle the load.

Nvidia’s advantage is a family of purpose-built products, including its latest, the SN5000 ethernet switch portfolio for deep-learning workloads that can connect GPU compute at speeds of as much as 800 gigabits per second (Gb/s). To put that in perspective, that’s 8,000 times faster than typical fiber internet speeds.
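
To see where that comparison comes from, here is a minimal back-of-the-envelope check; the 100 Mb/s "typical fiber" baseline is an assumption for illustration, not a number from the article.

```python
# Rough check of the 800 Gb/s vs. consumer fiber comparison above.
# The 100 Mb/s "typical fiber" baseline is an assumption, not from the article.

switch_speed_gbps = 800          # SN5000-class switch speed cited in the article
typical_fiber_mbps = 100         # assumed consumer fiber plan

ratio = (switch_speed_gbps * 1_000) / typical_fiber_mbps
print(f"{ratio:,.0f}x faster")   # -> 8,000x faster
```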

Nvidia made its biggest leap into networking in 2020 with the $6.9 billion acquisition of Mellanox, a company known for InfiniBand switches, which optimize data center connectivity.  As Huang said in an interview at the time: “When a problem is greater than one computer, then the network becomes the problem.”

2. Broadcom

Advanced Micro Devices is considered Nvidia’s biggest competitor in chips. But Broadcom is its largest rival in switching and routing gear favored by the so-called hyperscalers.

The San Jose, Calif.-based company recently redesigned its Tomahawk and Jericho families of gear, which are used in data centers and telecom platforms, to take on AI workloads. As a result, customers comparison-shop Broadcom’s offerings against Nvidia’s Spectrum Ethernet and Quantum InfiniBand switches.

In data centers, GPUs used for generative AI run in parallel to mine massive datasets. That requires efficient networks. Broadcom CEO Hock Tan said on the company’s most recent earnings call that it’s developing new Ethernet switches that are dedicated to such workloads. Broadcom’s updated Tomahawk switch chips are expected to be available in 2024, doubling the capacity of the previous version with the same power efficiency.

Still, Broadcom’s strategy has differed from both Nvidia’s and Marvell’s in that it has purchased dozens of companies in its 32 years. Instead of grouping employees around a new technology to develop, Broadcom has entered new markets via acquisitions. Some larger purchases include Symantec, a security company that protects data in the cloud, and CA Technologies, formerly Computer Associates, at one time the second-largest software company in the world after Microsoft. Broadcom is also aggressively working toward closing its $60 billion-plus acquisition of VMware, which I had the chance to discuss with Hock Tan at our recent Six Five Summit.

As a result, Broadcom’s market value has ballooned to $358 billion, making it the second-biggest U.S.-based semiconductor company after Nvidia. Tan said the company has the scale to become a big player in AI, given outsized demand so far this year. I expect Broadcom to lean further into the opportunity to drive networking for AI workloads, especially as AMD, Intel and others seek to give Nvidia competition.

For example, Broadcom’s Ethernet switching revenue tied to AI was $200 million in 2022, and “exponential demand from hyperscalers” points to a quadrupling of those sales this year, Tan said on the call. Total revenue in its most recent quarter was $8.7 billion.

3. Marvell Technology

Marvell Technology, based in Santa Clara, Calif., is the smallest and youngest company on this list of AI plumbing companies. It’s also unprofitable, though the company projects it will break even within two years.

That said, Marvell is considered the leader in optical digital signal processors (DSPs), which connect AI clusters used for computing. The company recently released Nova PAM4, an electro-optics platform that enables the highest speed of data movement in cloud artificial intelligence, machine learning and data center networks. It effectively doubled the bandwidth of any previous product.

Breakthroughs like that will be ever more important as the size of large training models grows 10-fold every year for the foreseeable future.

Optical interconnect technologies will be crucial for AI because of their advantages over copper wire, which has been used for 200 years. For one, information travels faster, at the speed of light, which reduces latency. Second, the number of channels in a single optical waveguide can be increased dozens of times, allowing many more streams of data to travel in parallel. Third, there’s less interference. Finally, energy consumption is lower. (Training GPT-3 took 1.287 gigawatt-hours, or as much electricity as 120 U.S. homes would consume in a year.)
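
For a rough sense of how the 1.287 gigawatt-hour figure maps onto the “120 homes” comparison, here is a quick sketch; the roughly 10,700 kWh of annual consumption per U.S. home is an assumed average, not a figure from the article.

```python
# Rough check of the GPT-3 energy comparison in the paragraph above.
# The ~10,700 kWh/year per-home figure is an assumed U.S. average,
# not a number from the article.

training_energy_kwh = 1.287e6            # 1.287 gigawatt-hours in kilowatt-hours
avg_us_home_kwh_per_year = 10_700        # assumed average annual household use

homes_for_a_year = training_energy_kwh / avg_us_home_kwh_per_year
print(f"~{homes_for_a_year:.0f} U.S. homes for a year")   # -> ~120
```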

CEO Matt Murphy said on the company’s most recent earnings call in May that AI-specific revenue will double every year from fiscal 2023 to fiscal 2025, when it will total $800 million. In a recent Six Five Summit conversation with Marvell COO Chris Koopmans, we broke down the AI opportunity and he explained the company’s AI strategy in depth. Total revenue was $5.8 billion in its most recent fiscal year, and outside the pandemic years, overall revenue growth has tended to run around 12%.
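
The doubling math behind Murphy’s guidance works out roughly as follows; the ~$200 million fiscal-2023 starting point is inferred from the $800 million endpoint rather than stated in the article.

```python
# Doubling path implied by Marvell's guidance: AI-specific revenue doubles
# each year from fiscal 2023 through fiscal 2025, ending near $800 million.
# The ~$200M fiscal-2023 base is inferred from that endpoint, not stated.

ai_revenue_musd = 200                    # assumed fiscal-2023 base, in $ millions
for fiscal_year in (2024, 2025):
    ai_revenue_musd *= 2                 # "double every year," per the earnings call
    print(f"FY{fiscal_year}: ~${ai_revenue_musd}M")
# FY2024: ~$400M
# FY2025: ~$800M
```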

Daniel Newman is the CEO and chief analyst at The Futurum Group, which provides or has provided research, analysis, advising or consulting to ServiceNow, NVIDIA, Microsoft, Amazon, IBM, Oracle and other technology companies. Neither he nor the firm has any positions in any of the other companies mentioned. Follow him on Twitter @danielnewmanUV.

More: Nvidia and these 6 tech giants will gobble up the AI pie

Also read: Nvidia has investors wondering: How long can a stock grow faster than the market?