NVIDIA continues to flex its technical muscle in Artificial Intelligence (AI) to seize new opportunities in the fast-growing chipset market. Long known as the powerhouse of AI “training” solutions, the company has recently been pushing into the adjacent—and potentially much larger—market for AI “inferencing” products. Training is a data-intensive process necessary for preparing machine learning, deep learning, and other artificial intelligence models for production applications. Training an AI model ensures that it can perform its designated inferencing task—such as recognizing faces or understanding human speech—accurately and in an automated fashion.
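The training-versus-inferencing distinction described above can be sketched in a few lines of plain Python. This is an illustrative toy logistic-regression model, not anything from NVIDIA's stack; all names and data here are hypothetical.

```python
# Toy illustration: "training" iteratively fits model weights to labeled data
# (the compute-intensive phase), while "inferencing" merely applies the frozen,
# trained model to new inputs (the cheap, high-volume phase).
import math

def train(samples, epochs=1000, lr=0.5):
    """Training: fit logistic-regression weights via gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # predicted probability
            w -= lr * (p - y) * x                      # gradient step on weight
            b -= lr * (p - y)                          # gradient step on bias
    return w, b

def infer(model, x):
    """Inferencing: apply the already-trained model to a new input."""
    w, b = model
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy labeled data: the label is 1 when the feature exceeds roughly 0.5.
data = [(0.1, 0), (0.2, 0), (0.3, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
model = train(data)          # expensive, done once
print(infer(model, 0.05), infer(model, 0.95))  # cheap, done per request
```

The asymmetry shown here is the crux of the market argument that follows: training happens once per model in a data center, while inferencing happens on every prediction, often on an edge device.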
NVIDIA Shifting Focus Toward AI Inferencing in Edge Applications
Traditionally, machine learning, deep learning, and other AI models are trained in clouds, server clusters, and other high-performance computing environments. Though some industry observers believe NVIDIA’s technology was designed only for AI training, its solutions have also been optimized for high-speed AI inferencing, primarily in cloud, data center, and server platforms. Until recent months, there was market uncertainty over whether NVIDIA’s inferencing capabilities would be up to the challenges of deployment in mobile, embedded, robotics, and other edge environments.
Going forward, inferencing is the dominant segment of the AI opportunity, and that fact should be noted if you’re invested in NVIDIA. McKinsey has predicted that the opportunity for AI inferencing hardware alone in the data center will be 2x that for AI training hardware by 2025 ($9-10B vs. $4-5B), and, in edge device deployments, it will be 3x larger for inferencing than for training by that same year. Allied Market Research recently released a study showing that the AI chip market is currently valued at around $7 billion, while forecasting it to grow to $90 billion by 2025. However one sizes the AI-accelerator hardware opportunity, the demand pull-through for ancillary software solutions, including development tools and algorithm libraries, will be commensurately larger for AI inferencing than for AI training use cases.
NVIDIA’s GPU Tech Is Both a Competitive Asset and a Hindrance in the AI Inferencing Market
NVIDIA’s most pressing competitive vulnerability lies in the fact that its core chipset technology, the graphical processing unit (GPU), has been optimized primarily for high-volume, high-speed training of AI models, though it is used for inferencing in most server-based machine learning applications as well. The GPU is also a significant competitive asset for NVIDIA in the AI wars, because it is the predominant chip architecture used for both training and inferencing in most server- and cloud-based applications of machine learning, deep learning, and natural language processing.
Indeed, Liftr Cloud Insights has estimated that the top four clouds in May 2019 deployed NVIDIA GPUs in 97.4 percent of their infrastructure-as-a-service compute instance types with dedicated accelerators. Recent indicators that NVIDIA is playing this advantage to the utmost include high-profile partnerships that allow it to address growing opportunities serving enterprises that want to run AI workloads on GPU servers in industry-specific, hybrid, and virtualized cloud-to-edge computing environments.
Nevertheless, NVIDIA recognizes that the much larger opportunity resides in inferencing chips and other components optimized for deployment in edge devices. The company has its work cut out for it. Various non-GPU technologies—including CPUs, ASICs, FPGAs, and various neural network processing units—have performance, cost, and power efficiency advantages over GPUs in many edge-based inferencing scenarios, such as autonomous vehicles and robotics.
Indeed, CPUs currently dominate edge-based inferencing, while NVIDIA’s GPUs are not well-suited for commodity inferencing in mobile, Internet of Things, and other mass-market use cases. McKinsey projects that CPUs will account for 50 percent of AI inferencing demand in 2025 with ASICs at 40 percent and GPUs and other architectures picking up the rest.
In edge-based inferencing, there is no one hardware/software vendor that is expected to dominate. In edge-based AI inferencing hardware alone, NVIDIA faces competition from dozens of vendors that either now provide or are developing AI inferencing hardware accelerators. NVIDIA’s direct rivals—who are backing diverse AI inferencing chipset technologies—include hyperscale cloud providers such as Amazon Web Services, Microsoft, Google, Alibaba, and IBM; consumer cloud providers such as Apple, Facebook, and Baidu; semiconductor manufacturers such as Intel, AMD, Arm, Samsung, Xilinx, and LG; and a staggering number of China-based startups.
Even in its core AI market stronghold, which is data center-based training, NVIDIA has been facing escalating competition. Though NVIDIA is still by far the dominant GPU supplier in the AI market, it has seen its competitive advantage wane as AMD and, just recently, Intel offer rival GPU products for AI, gaming, and other markets.
NVIDIA’s Recent Product Announcements Position It for AI Inferencing Accelerator Opportunities
Concerns aside, NVIDIA’s sophisticated R&D is paying off in the edge inferencing market, which bodes well for its ability to achieve significant adoption in this hotly competitive growth segment.
One notable recent milestone in NVIDIA’s favor was the release of AI industry benchmarks that show its technology setting new records in both training and inferencing performance. MLPerf has become the de facto standard benchmark for AI training and, with the new MLPerf Inference 0.5 benchmark, for inferencing from cloud to edge. NVIDIA’s recent achievement of the fastest results on a wide range of MLPerf inferencing benchmarks is no mean feat. Coming on the heels of its equally dominant results on MLPerf training benchmarks, it’s also no big surprise. As attested by avid customer adoption and testimonials, the vendor’s entire AI hardware/software stack has been engineered for the highest performance in all AI workloads in all deployment modes. These stellar benchmark results are just further proof points for NVIDIA’s laser focus on low-cost, high-performance AI platforms.
Another significant milestone for NVIDIA in the inferencing market was the announcement of its forthcoming Jetson Xavier NX module. Due for general availability in March 2020, this new AI-optimized hardware module offers server-class performance, a small footprint, low cost, low power, high performance, and flexible deployment. These features suit Jetson Xavier NX both for AI inferencing applications at the edge and in the data center. Just as important to broad adoption is its ecosystem-readiness. It is pin-compatible with the existing Jetson Nano hardware platform and also supports AI models built in all major frameworks, including TensorFlow, PyTorch, MXNet, Caffe, and others.
In the AI Wars, NVIDIA Will Be the Supplier to Beat — for a While
In the AI wars, I believe NVIDIA will still be the supplier to beat for at least the next two years, and not only because it offers the predominant hardware accelerator technology for core server-based training and inferencing workloads. It is also because NVIDIA’s CUDA libraries, APIs, and ancillary software offerings are widely used, on a global basis, for the widest range of AI development and operations challenges.
We have every confidence that NVIDIA will remain a blue-chip provider of vertically integrated hardware and software for most mass-market AI opportunities, including the coming era of ubiquitous edge-based AI deployments. With an annual revenue run rate nearing $12 billion, NVIDIA retains a formidable lead over other AI-accelerator chip manufacturers, especially Intel and AMD, and the wide range of cloud, analytics, and development tool vendors who have flocked into the AI space over the past several years to address substantial demand growth.
Investors’ perception of NVIDIA will shift toward edge-oriented growth opportunities as soon as Jetson Xavier NX comes to market and, we are confident, achieves broad adoption as an embedded inferencing module in edge devices for every application. Enterprise buyers will shift their perception of NVIDIA toward edge-inferencing projects as soon as they evaluate the performance of Jetson Xavier NX in competitive bake-offs against rival AI accelerators.
Futurum Research provides industry research and analysis. These columns are for educational purposes only and should not be considered in any way investment advice.
The original version of this article was first published on Futurum Research.