Intel Vs. Nvidia: The Easy Path For Intel



Google’s TPU. Soon, an equivalent will be found inside Intel chips.

Tomorrow, Intel (NASDAQ:INTC) will announce its plans for fighting off Advanced Micro Devices (NASDAQ:AMD) and Nvidia (NASDAQ:NVDA) in the server room.

A good while ago, I postulated that GPU compute was going to take share in the server room, and that Nvidia was the main competitor bringing that threat. Since then, "GPU compute" has come to be dominated by AI applications, where GPUs are used to speed up neural network training and inference, and Nvidia dominates that field.

So tomorrow, Intel will announce something to fight back against Nvidia. What will that be? I think it's both easy to guess and easy for Intel to implement. Intel will basically include a logic block dedicated to accelerating NNs (Neural Networks). This is the same thing others have already done: Nvidia by including tensor cores within its Tesla V100, Google (NASDAQ:GOOG) with its TPU, and Tesla with the chip it plans to unveil next year.

With little chip-making expertise, Google managed to bring its TPU (Tensor Processing Unit) to production in a mere 15 months. Intel can do it faster. And when it comes to beating GPUs at NN acceleration, this step is obvious. Moreover, for someone who mostly wants NN acceleration, the general-purpose GPU logic surrounding Nvidia's tensor cores is rather "optional," whereas the CPU always needs to be present in the server. As a result, including NN acceleration within a CPU is even more obvious than doing so on a GPU.
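To make that "logic block" concrete: the work it accelerates is dense matrix multiply-accumulate, which is what nearly all NN training and inference reduces to. Below is a minimal NumPy sketch of one fully connected layer; the sizes and the 8-bit quantization are illustrative (they mirror what Google has described for its first-generation inference TPU) and are not anything Intel has announced.

```python
import numpy as np

# A fully connected NN layer reduces to a dense matrix multiply-accumulate:
# activations (batch x inputs) times weights (inputs x outputs), plus bias.
# This is the operation a TPU-style block or a "tensor core" hardwires.

batch, n_in, n_out = 64, 1024, 1024          # illustrative sizes

rng = np.random.default_rng(0)
activations = rng.integers(-128, 127, size=(batch, n_in), dtype=np.int8)
weights     = rng.integers(-128, 127, size=(n_in, n_out), dtype=np.int8)
bias        = rng.integers(-1000, 1000, size=n_out, dtype=np.int32)

# Accumulate in int32, as 8-bit inference accelerators typically do.
acc = activations.astype(np.int32) @ weights.astype(np.int32) + bias

# Apply the nonlinearity (ReLU) and requantize to 8 bits for the next layer.
out = np.clip(np.maximum(acc >> 8, 0), 0, 127).astype(np.int8)

# Every one of these multiply-adds is independent work that a dedicated
# MAC array can perform in parallel, with no instruction fetch or decode.
print(f"{batch * n_in * n_out:,} multiply-accumulates for one layer")
```

A fixed-function block spends nearly all of its transistors and energy on those multiply-adds, which is exactly where a general-purpose CPU core spends the least.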

The benefits of this approach are equally clear. A dedicated NN logic block is massively more efficient at accelerating NNs than a CPU or GPU. For instance, from an in-depth look at Google's first-generation TPU, we have this particularly relevant chart:

[Chart from Google's in-depth look at its first-generation TPU]
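The magnitude of the gap is easy to sanity-check with back-of-envelope arithmetic, using figures Google has published for that first-generation TPU (a 256 x 256 systolic array of 8-bit multiply-accumulate units clocked at 700 MHz); the sketch below simply multiplies those numbers out.

```python
# Back-of-envelope peak throughput for Google's first-generation TPU,
# using its published specs: 256 x 256 8-bit MAC units at 700 MHz.

mac_units   = 256 * 256          # 65,536 multiply-accumulate units
clock_hz    = 700e6              # 700 MHz
ops_per_mac = 2                  # one multiply plus one add per cycle

peak_ops = mac_units * clock_hz * ops_per_mac
print(f"Peak throughput: ~{peak_ops / 1e12:.0f} tera-ops per second")  # ~92
```

No general-purpose CPU core, busy with caches, branch prediction and out-of-order machinery, comes close to that arithmetic density per watt, which is the point the chart above makes.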

Hence, there's little doubt about what Intel will announce tomorrow to fight back against Nvidia in the server room. The company will announce the inclusion of NN "tensor cores" within some of its server-bound CPUs.

Intel faces a great many threats, from AMD to Nvidia to Apple (NASDAQ:AAPL) dropping its chips in a couple of years. As a result, it's difficult to say whether one should react all that positively to this news where Intel is concerned.

However, it's pretty clear who stands to lose ground from this obvious development: Nvidia. That's because including these logic blocks is straightforward, and the benefits versus deploying a large number of GPUs are clear. I expect products using them to materialize quickly, within a year or so at most.

Conclusion

Tomorrow, Intel will announce the inclusion of a "tensor core"-style logic block for accelerating NNs within some of its server chips. Nvidia is the prime target of this move, which I believe could have a visible impact on Nvidia's prospects in the server room.

I also believe AMD will ultimately do the same (include similar logic blocks), thus compounding the problem for Nvidia.

Disclosure: I am/we are short NVDA.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.

Additional disclosure: The “short Idea” is for NVDA only.




