DeepSeek Launches Experimental Model DeepSeek-V3.2-Exp with Significant Efficiency Improvements

DeepSeek, the Hangzhou-based Chinese artificial-intelligence developer, has announced the launch of a new experimental model, DeepSeek-V3.2-Exp, which it describes as "more efficient in training and better at processing long texts" than previous versions of its language models.
The company explained in a post on the developer platform Hugging Face that the release represents "a transitional step towards next-generation architecture," referring to an upcoming project expected to be among its most prominent launches since the V3 and R1 models, which won wide acclaim in Silicon Valley and among global investors earlier this year.
According to the announcement, the model uses a new mechanism called DeepSeek Sparse Attention, which the company says "reduces computing costs and enhances the model's performance in certain applications." DeepSeek also announced on X on Monday that it is cutting its API prices by more than 50%.
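The announcement does not detail how DeepSeek Sparse Attention works internally, but the general idea behind sparse attention is that each query token attends to only a small subset of keys rather than all of them, which is what reduces compute on long sequences. The sketch below is a minimal, hypothetical illustration of one common variant (top-k key selection) in NumPy; it is not DeepSeek's actual method, and the function name and parameters are invented for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(q, k, v, top_k=4):
    """Hypothetical sparse attention: each query attends only to its
    top_k highest-scoring keys instead of every key in the sequence."""
    scores = q @ k.T / np.sqrt(q.shape[-1])          # (n_q, n_k) scaled scores
    # Indices of the top_k largest scores per query row (order not needed).
    idx = np.argpartition(scores, -top_k, axis=-1)[:, -top_k:]
    # Mask everything outside the top_k to -inf so softmax zeroes it out.
    masked = np.full_like(scores, -np.inf)
    np.put_along_axis(masked, idx, np.take_along_axis(scores, idx, axis=-1), axis=-1)
    weights = softmax(masked, axis=-1)               # exact zeros off the top_k
    return weights @ v

# Toy example: 8 query tokens, 8 key/value tokens, head dimension 16.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 16)) for _ in range(3))
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)  # (8, 16)
```

Note that this toy version still computes the full score matrix before masking, so it shows the selection logic rather than the savings themselves; a production kernel would avoid materializing scores for unselected keys in the first place.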
While expectations suggest the new architecture may not move markets the way previous releases did, its success could put regional rivals such as Alibaba's Qwen, along with American companies like OpenAI, under growing pressure, especially if DeepSeek manages to deliver "high capabilities at a much lower cost" than competitors in developing and training models.