
Meta says it needs more GPUs
Zuckerberg said that Deepseek's achievements on a limited budget reinforce his belief that Meta is on the right path. He also said that Meta is trying to understand the innovations introduced by Deepseek and plans to integrate them into the Llama platform.
Deepseek's influence triggered a large sell-off in the artificial intelligence market. Investors began selling AI shares on the concern that artificial intelligence models would now need less processing power. However, Zuckerberg said that Meta's billion-dollar investments in GPUs will not go to waste and that such investments will provide a strategic advantage over time. "I believe that heavy capital expenditure and investing in infrastructure will be a strategic advantage over time," Zuckerberg said.

Meanwhile, Meta's financial situation is quite strong. In the fourth quarter of 2024, the company recorded revenue of $48.39 billion, a 22% increase compared to a year earlier. Net profit reached $20.8 billion, up 43%.
The "missing" perspective of the so-called experts
Most media outlets and "experts" claimed that Deepseek revealed the high GPU spending of artificial intelligence firms to be a deception and an empty investment frenzy. However, these so-called experts make such comments without knowing what artificial intelligence is or how it works. Those who cannot see that the growth in processing power has shifted from the training stage toward helping models "reason" better naturally fall into this error.
Consider the widespread claim that all accessible data in the world has already been "consumed" for training artificial intelligence. If that is the case, does training a new model still require spending as much as it did two years ago? No: new models can be derived from already trained models. You may think of them as buildings built on top of each other.

The next stage is "reasoning". Of course, we are not talking about real reasoning; we are talking about the model doing extra "thinking" about your request. Models like OpenAI o1, o3 and Deepseek R1 do this. So what happens behind the scenes? Naturally, more processing power is spent than with classic models. Now add the "artificial intelligence agents" we will hear about often in 2025, that is, AI that works autonomously on your behalf. It is obvious that the need for processing power will not decrease; on the contrary, it will increase.
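The point about reasoning models can be made with simple back-of-envelope arithmetic: a model that generates hidden "thinking" tokens before its answer does proportionally more work per request. The sketch below uses entirely hypothetical token counts for illustration; it is not based on measured figures from any of the models named above.

```python
# Back-of-envelope sketch: why "reasoning" models consume more compute per request.
# All numbers below are hypothetical assumptions chosen for illustration.

def inference_tokens(answer_tokens: int, reasoning_tokens: int = 0) -> int:
    """Total tokens the model must generate to serve one request.

    Generation cost scales roughly with the number of tokens produced,
    so this serves as a simple proxy for per-request compute.
    """
    return answer_tokens + reasoning_tokens

# A classic model answers directly: only the visible answer is generated.
classic = inference_tokens(answer_tokens=300)

# A reasoning-style model first produces a hidden chain of "thinking"
# tokens before the visible answer (hypothetical 4000-token chain).
reasoning = inference_tokens(answer_tokens=300, reasoning_tokens=4000)

print(f"classic: {classic} tokens, reasoning: {reasoning} tokens, "
      f"ratio: {reasoning / classic:.1f}x")
```

Under these assumed numbers the reasoning-style request generates over 14 times as many tokens, which is the sense in which per-request compute goes up even if per-token efficiency improves.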