Why OpenAI is Looking Beyond Nvidia for AI Chip Solutions

TLDR

  • OpenAI has been searching for alternatives to Nvidia chips since last year, targeting solutions for approximately 10% of its inference computing requirements
  • A planned $100 billion Nvidia investment in OpenAI has been delayed for months beyond its expected closure timeline
  • The ChatGPT developer has signed chip supply agreements with AMD, Broadcom, and Cerebras Systems to diversify its hardware sources
  • Nvidia licensed Groq’s technology in a $20 billion deal and recruited its chip design team to enhance inference capabilities
  • Sam Altman and Jensen Huang both publicly minimized tensions between the two AI industry leaders

OpenAI has been pursuing alternatives to certain Nvidia chips for over a year. The search focuses on hardware better suited for inference operations, where AI models generate responses to user requests.

The company requires faster processing for specific use cases, including software development and communication between AI models.

OpenAI plans to source alternative chips for roughly 10% of its future inference requirements. Multiple sources confirmed the company’s concerns about Nvidia’s current hardware speeds for particular tasks.

Investment Agreement Faces Delays

Nvidia revealed plans last September to invest up to $100 billion in OpenAI. The agreement was expected to close within weeks but remains incomplete months later.

OpenAI’s evolving product strategy has modified its computing requirements. These changes have extended the negotiation timeline with Nvidia.

The company has secured chip agreements with AMD, Broadcom, and Cerebras Systems during this period. These vendors offer hardware designed to compete with Nvidia’s products.

Performance issues surfaced in OpenAI’s Codex code generation tool. Team members linked some of Codex’s limitations to Nvidia’s GPU architecture.

Sam Altman stated on January 30 that coding customers prioritize speed. He confirmed OpenAI would address this through its Cerebras partnership.

Hardware Specifications

OpenAI has targeted companies producing chips with substantial on-chip SRAM, memory that is built directly into the chip’s silicon.

The design provides speed benefits for chatbots handling millions of user interactions. Inference places heavier demands on memory bandwidth than training because chips retrieve data more frequently.

Nvidia and AMD GPUs rely on external memory configurations, which add processing delays and slow chatbot response times.
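To see why on-chip memory matters for response speed, a rough back-of-the-envelope sketch helps: when single-stream decoding is bound by memory traffic, the number of tokens a chip can generate per second is capped near its memory bandwidth divided by the bytes of model weights it must stream for each token. The model size and bandwidth figures below are illustrative assumptions, not vendor specifications or details from OpenAI’s evaluations.

```python
# Rough, illustrative estimate of decode speed for a memory-bandwidth-bound LLM.
# Each generated token requires streaming (roughly) all model weights through the
# chip, so single-stream tokens/sec is capped near bandwidth / weight_bytes.
# All numbers below are assumed, order-of-magnitude placeholders.

def max_tokens_per_second(weight_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Upper bound on single-stream decode speed when memory traffic dominates."""
    return bandwidth_bytes_per_s / weight_bytes

weights = 70e9  # assumed 70B-parameter model stored in 8-bit weights (~70 GB)

bandwidths = {
    "off-chip HBM GPU (assumed ~3 TB/s)": 3e12,
    "on-chip SRAM design (assumed ~20 TB/s)": 20e12,
}

for name, bw in bandwidths.items():
    print(f"{name}: ~{max_tokens_per_second(weights, bw):.0f} tokens/s upper bound")
```

Under these assumed numbers, the SRAM-heavy design’s higher bandwidth translates directly into a higher ceiling on single-stream generation speed, which is the latency that coding and chatbot users notice.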

Competing models such as Anthropic’s Claude and Google’s Gemini run on different hardware: Google’s tensor processing units, which are optimized for inference computations.

OpenAI explored partnerships with Cerebras and Groq for enhanced inference chips. Nvidia’s $20 billion licensing agreement with Groq terminated OpenAI’s discussions with that company.

Tech Licensing and Acquisitions

Nvidia recruited Groq’s chip design staff alongside the licensing deal. Groq had attracted investor interest at a $14 billion valuation while negotiating with OpenAI.

Nvidia issued a statement saying customers select its chips for inference performance and cost efficiency. The company described Groq’s technology as complementary to its roadmap.

An OpenAI representative confirmed Nvidia powers most of the company’s inference infrastructure. The statement emphasized Nvidia’s performance-per-dollar leadership for inference operations.
