Qualcomm revealed on Monday its plans to launch new artificial intelligence accelerator chips, intensifying the competition with Nvidia. The stock surged by 11% following this announcement.
This marks a significant shift for Qualcomm, traditionally known for mobile and wireless semiconductors, not AI chips for large-scale data centers. The new chips, the AI200 and AI250, are set for release in 2026 and 2027, respectively, with systems that can fill entire liquid-cooled server racks.
Qualcomm is now directly competing with Nvidia and AMD, which have long offered GPUs in full-rack configurations essential for AI labs running complex models. Qualcomm’s AI chips are derived from the Hexagon neural processing units (NPUs) found in the company’s smartphone processors.
Durga Malladi, Qualcomm’s general manager for data centers, said the company first focused on proving itself in other areas before moving into data center technology, and that its strength in those domains made the expansion a natural next step.
This new venture marks Qualcomm’s entry into one of the fastest-growing technology sectors: data centers designed for AI. McKinsey estimates that nearly $6.7 trillion will be spent on data centers through 2030, with AI-focused systems driving the majority of this spending.
Nvidia currently dominates the AI accelerator market, holding over 90% of the share and driving its market value beyond $4.5 trillion. Its GPUs were essential in training OpenAI’s GPT models, including the one behind ChatGPT.
However, companies like OpenAI have started exploring alternatives. OpenAI recently announced it would purchase chips from AMD and potentially invest in the company. Other tech giants like Google, Amazon, and Microsoft are also developing in-house AI accelerators for their cloud services.
Qualcomm’s chips are designed specifically for inference, which means running AI models, rather than training, the compute-intensive process that labs such as OpenAI use to create new AI capabilities.
Qualcomm claims that its rack-scale systems will be more cost-efficient for customers like cloud providers. A typical rack will use 160 kilowatts of power, similar to the power consumption of some Nvidia GPU racks.
In addition to full systems, Qualcomm will also offer its AI chips and other components individually. This flexibility allows hyperscalers, who prefer to design their own racks, to mix and match parts as needed. Qualcomm also hinted that other AI chip companies like Nvidia and AMD could become clients for its components, such as CPUs.
Although Qualcomm declined to comment on pricing or the number of NPUs per rack, the company pointed to its May partnership with Saudi Arabia’s Humain, which will use Qualcomm’s AI inferencing chips in local data centers. Humain has committed to deploying systems drawing up to 200 megawatts of power.
Qualcomm says its AI chips offer advantages in power efficiency, cost, and memory handling: its cards support 768 gigabytes of memory, more than competing offerings from Nvidia and AMD.
