
OpenAI, DeepSeek, and Wafers: What Changes for the Semiconductor Industry

Author:

January 24, 2025


On January 10th, DeepSeek-R1 was launched, and, as its whale logo suggests, the emergence of this Chinese tech titan made a big wave in the ocean of AI technologies. Unlike the widely used ChatGPT, DeepSeek was built without the most advanced AI chips, which are made with semiconductors such as the FZ wafer, chosen for its unique properties.

The results of this shift are still to be seen, but the undeniable truth is that semiconductor manufacturers are worried about the impact it could have. So, what makes DeepSeek so different from OpenAI, and how does this affect silicon wafers?

An Earth-Shattering Whale  

DeepSeek is a Chinese AI startup that has recently unveiled several impressive generative AI models. One of those models, DeepSeek-R1, is a "reasoning model" that works through a lengthy chain of reasoning before responding.

This kind of reasoning is a relatively new paradigm, introduced by OpenAI last year, and many believe it is the most promising direction for AI research. In terms of performance, DeepSeek's latest model is comparable to OpenAI's o1 model from September of last year.

However, compared to OpenAI's models, DeepSeek requires significantly less time and money to train and run. According to its own technical report, DeepSeek claimed that training its V3 model cost only $5.576 million. In comparison, OpenAI CEO Sam Altman stated that training its GPT-4 model cost more than $100 million, with more multimillion-dollar investments on the way.

To achieve this, DeepSeek did not need Nvidia's advanced H100 chips, widely known as "AI chips."

What Are AI Chips?

Artificial intelligence (AI) chips are specialized computer microchips for AI tasks. They are frequently designed with machine learning, data analysis, and natural language processing in mind. AI chips include, for example, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), and graphics processing units (GPUs).

AI chips can be made from various materials, but FZ (float-zone) wafers are a common choice due to their incredibly pure crystalline silicon. FZ silicon is specifically used in AI accelerator chips and microprocessors to achieve faster, more efficient, and more scalable results.

Why Doesn't DeepSeek Need AI Chips?

The truth is that saying DeepSeek doesn’t use AI Chips is an error: it simply uses less advanced ones.  

In 2022, the Biden administration imposed restrictions on chip exports, under which U.S. chipmakers such as Nvidia could not ship their most powerful GPUs (graphics processing units, the go-to chip for training AIs) to China and certain other countries.

Nvidia, in particular, could not sell its more powerful H100 chips to China, even though tech and AI companies in the United States can use them freely. As a result, DeepSeek had to settle for the H800 chips that the U.S. permitted Nvidia to sell in China.

Due to this shortage of processing power, Chinese researchers were compelled to use less memory when training and running AI models. This allowed DeepSeek to build a reliable model at a fraction of the cost.

What’s the Difference Between H100 and H800 Chips?

Nvidia's H100 and H800 chips are very similar; in fact, the GPU inside them is identical. What differs is the configuration surrounding it. According to Nvidia, the H100 delivers:

  • Up to 9 times faster AI training
  • Up to 30 times faster inference performance
  • Up to 3 times faster performance in specific workloads compared to the A100 with the NVLink Switch System

The Nvidia H800, on the other hand, is a modified version of the H100 designed specifically for sale in China to comply with export regulations. The primary distinction is a lower chip-to-chip data transfer rate (approximately 300 GB/s as opposed to 600 GB/s for the H100).
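The practical cost of that lower interconnect bandwidth is simple arithmetic: moving the same data between chips takes roughly twice as long on the H800's link. A quick sketch using the figures above (the payload size is an illustrative assumption, not a published number):

```python
# Time to move a payload between two GPUs at a given link bandwidth.
def transfer_seconds(gigabytes: float, gb_per_s: float) -> float:
    return gigabytes / gb_per_s

PAYLOAD_GB = 120  # illustrative payload, e.g. a large model's weights

h100_time = transfer_seconds(PAYLOAD_GB, 600)  # H100-class link
h800_time = transfer_seconds(PAYLOAD_GB, 300)  # export-compliant H800 link

print(h800_time / h100_time)  # 2.0 -> the H800 link is twice as slow
```

Since multi-GPU training constantly shuffles data between chips, this halved bandwidth is precisely the constraint DeepSeek's engineers had to work around.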

Understanding DeepSeek and Its Lack of AI Chips Made with an FZ Wafer

How Did DeepSeek Train AI Models with Nvidia H800 Chips?

By employing a "mixture of experts" methodology, the DeepSeek models activate only the subset of their parameters that is specifically tailored to a particular kind of query. This improves speed and saves computing power.

This strategy was not created by DeepSeek, but the company discovered innovative ways to use the architecture to cut down on the amount of time needed for computer processing during pretraining.  
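The mixture-of-experts idea can be sketched in a few lines: a small "router" scores each input, and only the top-scoring expert networks actually run, so most of the model's parameters sit idle on any given query. This toy NumPy sketch illustrates the mechanism only; the sizes and names are invented and do not reflect DeepSeek's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 8 "expert" weight matrices, but only 2 run per input.
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS))  # scores experts per input

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route input x to its top-k experts and mix their outputs."""
    scores = x @ router                   # one score per expert
    top = np.argsort(scores)[-TOP_K:]     # indices of the best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()              # softmax over the chosen experts only
    # Only TOP_K of NUM_EXPERTS matrices are multiplied -> compute savings.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(DIM)
y = moe_forward(x)
print(y.shape)  # (16,)
```

Here 6 of the 8 expert matrices are never touched for this input, which is the source of the savings: total parameter count stays large while per-query compute stays small.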

When AI models are trained to respond to a query, they must evaluate enormous amounts of data to optimize their parameters and provide precise answers. Along the way, the model collects and stores a great deal of information about the problem and its possible solutions, which demands substantial memory and processing power.

DeepSeek’s biggest innovation was finding a way to save time when the model is "thinking" by storing the context data in a compressed form. This improves speed and saves memory without sacrificing the quality of the response the user sees.  

Does This Mean the End of AI Chips?  

Nvidia, one of the biggest chip manufacturers in the world, saw its stock value fall nearly 17% following DeepSeek's new AI models, the biggest single-day market-value loss for any company in U.S. stock market history. The Chinese startup's models showed that successful AI models could be trained with much less computing power and in much less time than was previously believed.

This led investors to question whether demand for the expensive Nvidia GPUs would continue to rise or plummet. To understand the level of demand for these chips, consider that Nvidia's Blackwell GPUs were already almost fully sold out for 2025.

If demand for AI stays the same, these reduced computing requirements will result in lower revenue than investors anticipated. To make matters worse for Nvidia and other chip manufacturers, DeepSeek released its model openly, meaning anyone with a laptop and an internet connection can download it for free.

However, this doesn’t mean AI chips are no longer the future.

If DeepSeek makes access to these AI models much more affordable, it may increase overall demand for AI services, resulting in significantly more revenue for chip companies. Moreover, it may help solve the wafer shortage that severely affected the AI industry.  

FZ Wafer Used in AI Chips

Understand the Future of FZ Wafers and AI Chips  

The long-term impacts of DeepSeek's new AI models on wafer and chip manufacturers remain deeply uncertain. While demand for advanced H100 chips may fall, AI chips are still crucial to many startups and technologies today. Moreover, limited access to computing power is still a primary obstacle for DeepSeek.

Nevertheless, an immediate assumption could be that the cost of advanced chips, and as a result of advanced AI models, could fall. Here at Wafer World, we're eager to see what comes next. If you'd like to learn more about the semiconductor industry and our products, reach out!
