The firm explained that Naver said it will use Intel Sapphire Rapids CPUs and AI accelerators for inference in its Naver Place AI, a store and restaurant search/recommendation service.
However, Bernstein believes the move is not a game changer yet. “By far, H100 of NVIDIA is the best performing AI chip in the world, ~1.5x-4.5x better than their previous generation (A100) on inference, and better on available inference tests against accelerators from Google, Qualcomm (NASDAQ:QCOM), HabanaLabs,” said the Wall Street firm.
“We still believe switching from GPU to CPU is not an ideal option because it is super costly for AI tasks, especially LLM model training. Despite the fact that the NVIDIA H100 is much more expensive on a per-chip basis, the total cost for training and inference is still lower, regardless of its production and supply issues,” the analysts added.
The firm said it does not believe Naver can replace its GPU-dependent LLM training and tuning with Intel CPUs in the medium to long term.
“In conclusion, we think the collaboration between Naver and Intel is doable for very narrow use cases, or inference function only, or low enough parameters; but definitely not the game changer yet,” argues Bernstein.