
Copilot+ PCs with ARM processors already have native support for DeepSeek

When we talk about artificial intelligence, the first names that come to mind are ChatGPT from OpenAI, Copilot from Microsoft and Gemini from Google. A few days ago they were joined by DeepSeek, a new open source AI that any user can run locally without an Internet connection.

To use it at its best, a state-of-the-art graphics card is needed to obtain the highest performance. A few hours ago, AMD published a performance test in which it claims that the RX 7900 XT offers much higher performance than the NVIDIA RTX 4090.

If we do not have a dedicated graphics card, something quite common in laptops, a neural processing unit is more than enough.

When we talk about a neural processing unit (NPU), we are talking about a processor designed to run artificial intelligence tasks locally, without depending on a dedicated graphics card or an Internet connection.

Since mid-2024, with the presentation of the first generation of Qualcomm processors built on the ARM architecture, Microsoft has used the term Copilot+ PC, a label that applies only to laptops that include an NPU with performance equal to or greater than 45 TOPS.

The arrival of DeepSeek has caused a major stir among companies dedicated to AI, since it was trained at a much lower cost than OpenAI's models, which rely on NVIDIA GPUs. This has raised questions about whether we are in an AI bubble and caused a spectacular drop in the shares of the company led by Jensen Huang.

Qualcomm Snapdragon X processors with DeepSeek support

Microsoft, one of the companies that has been betting heavily on artificial intelligence from the start through Copilot, available in Windows 11 and in the Office suite, has just announced that it already has a version of DeepSeek-R1 optimized for the NPU of Qualcomm processors.

In this way, all Copilot+ PC compatible devices can take advantage of the NPU in the most efficient way possible. According to the company, this NPU-optimized version of DeepSeek will offer very competitive time to first token and throughput, and running it will barely affect battery consumption.

It should be remembered that the neural processing unit has been specifically designed to perform local AI tasks without requiring a dedicated graphics card, and that it consumes far less battery carrying out the same tasks than a desktop PC or laptop using a graphics card would.
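For readers curious about what "running on the NPU" looks like in practice, below is a minimal sketch, assuming a Snapdragon X machine with the onnxruntime-qnn build installed and a quantized ONNX model at the placeholder path model.onnx. It simply shows how an application can prefer the Qualcomm NPU through ONNX Runtime's QNN execution provider and fall back to the CPU; it is an illustration, not Microsoft's actual distribution mechanism.

```python
# Sketch: prefer the Qualcomm NPU via ONNX Runtime's QNN execution provider,
# falling back to the CPU when the NPU build is not available.
# "model.onnx" is a placeholder path for any quantized ONNX model.
import onnxruntime as ort

available = ort.get_available_providers()
print("Available execution providers:", available)

providers = (
    ["QNNExecutionProvider", "CPUExecutionProvider"]
    if "QNNExecutionProvider" in available
    else ["CPUExecutionProvider"]
)

# Create the inference session; ONNX Runtime picks the first provider it can use.
session = ort.InferenceSession("model.onnx", providers=providers)
print("Session is running on:", session.get_providers())
```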

This initial version will include the DeepSeek-R1-Distill-Qwen-1.5B model, which has been analyzed by an AI research team at the University of Berkeley and achieves a high percentage of correct answers. In the coming weeks, support for models with 7 and 14 billion parameters will also be added.
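Since the distilled model's weights are openly published, anyone can already try it locally through the general open-weights route, independently of the NPU-optimized build. The sketch below uses the Hugging Face Transformers library and the public deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B checkpoint; it runs on CPU or GPU, and the prompt text is purely illustrative.

```python
# Sketch: run the open DeepSeek-R1-Distill-Qwen-1.5B weights locally
# with Hugging Face Transformers (CPU/GPU path, not the NPU-optimized build).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Build a chat-style prompt and generate a short reply, fully offline once downloaded.
messages = [{"role": "user", "content": "Explain what an NPU is in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=128)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```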

First Qualcomm, then the others

The fact that Qualcomm processors are the first to receive DeepSeek support should come as no surprise, since the relationship between the two companies has been very good in recent years. Further proof of this excellent relationship came in mid-2024, when Microsoft and Qualcomm presented the new Snapdragon X processors, whose devices, as mentioned above, debuted the Copilot+ PC certification.

Both Intel, with the Core Ultra 200, and AMD, with the Ryzen AI, also hold this certification, since both integrate an NPU with a minimum performance of 45 TOPS.
