
Qualcomm Outperforms Nvidia in AI Chip Testing

Qualcomm's artificial intelligence chips outperformed those of rival Nvidia in two of three power efficiency metrics, newly published test results show.

Nvidia is the market leader for training AI models with vast quantities of data. After these AI models have been trained, however, they are put to widespread use in a process known as "inference," performing tasks such as generating text responses to prompts and determining whether an image contains a cat.
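For readers unfamiliar with the term, the snippet below is a minimal sketch of inference in Python, assuming the PyTorch and torchvision libraries and a hypothetical image file cat.jpg: a pretrained image classifier is loaded and run forward once to label an image, with no further training involved.

```python
import torch
from torchvision import models
from PIL import Image

# Load a pretrained classifier; inference reuses existing weights,
# so no training step is performed here.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.eval()

# "cat.jpg" is a placeholder path used only for illustration.
preprocess = weights.transforms()
image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():              # inference: a single forward pass, no gradients
    logits = model(image)

label = weights.meta["categories"][logits.argmax(dim=1).item()]
print(label)                       # e.g. "tabby" if the image contains a cat
```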

Analysts predict that the market for data center inference processors will expand rapidly as businesses integrate AI technologies into their products. However, companies such as Google are already investigating ways to limit the costs that this will incur.

One of these major costs is electricity, and Qualcomm has leveraged its experience designing chips for battery-powered devices such as smartphones to develop a chip called the Cloud AI 100 that aims to conserve energy.

In data published on Wednesday by MLCommons, an engineering consortium that maintains testing benchmarks widely used in the AI chip industry, Qualcomm's Cloud AI 100 chip outperformed Nvidia's flagship H100 chip at classifying images, based on the number of data center server queries each chip could execute per watt.

Qualcomm's processors achieved 227.4 server queries per watt, compared with Nvidia's 108.4.
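As a rough sketch of the metric being reported (server queries executed per watt of power drawn), the relative advantage implied by these published figures can be computed directly; the actual MLCommons measurement methodology is more involved than this simple arithmetic.

```python
# Relative efficiency on the image-classification test, using the
# published queries-per-watt figures quoted above.
def relative_advantage(queries_per_watt_a: float, queries_per_watt_b: float) -> float:
    return queries_per_watt_a / queries_per_watt_b

qualcomm_cloud_ai_100 = 227.4   # server queries per watt
nvidia_h100 = 108.4             # server queries per watt

print(f"{relative_advantage(qualcomm_cloud_ai_100, nvidia_h100):.2f}x")  # ~2.10x
```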

Qualcomm also outperformed Nvidia in object detection, scoring 3.8 queries per watt compared to Nvidia’s 2.4 queries per watt. Object detection can be used in applications such as analyzing retail store surveillance footage to determine the most frequented areas.

Nvidia, on the other hand, ranked first in both absolute performance and power efficiency in a test of natural language processing, the AI technology most commonly used in chatbots. Qualcomm ranked second with 8.9 queries per watt, behind Nvidia's 10.8 queries per watt.
