On September 30, China’s AI chip leader Cambricon announced the completion of its private placement, issuing 3.3349 million shares at 1,195.02 yuan each, raising nearly 3.985 billion yuan.
After deducting issuance costs, the net proceeds amounted to 3.953 billion yuan, which will be used to fund a large-model chip platform project and a large-model software platform project and to replenish working capital.
According to the announcement, the final list of 13 investors in the private placement included GF Fund, UBS AG, New China Asset Management, Huatai-PineBridge Fund, Nord Fund, Guotai Haitong Securities, Guosen Securities (Hong Kong) Asset Management, China Universal Asset Management, Bosera Fund, Harvest Fund, E Fund, Huashang Fund Management, and Shenwan Hongyuan Securities.
As of the close on September 30, Cambricon’s stock price was 1,325 yuan per share. In early September, Goldman Sachs had raised Cambricon’s 12-month target price to 2,104 yuan.
The shares subscribed in this issuance may not be transferred for six months from the completion of the issuance, except as otherwise provided by laws, regulations, or regulatory documents. Shares derived from stock dividends or from the conversion of capital reserves into share capital will follow the same lock-up arrangement.
Cambricon said that once the new shares issued to the specific investors are registered, the company will add 3.3349 million restricted tradable shares. The issuance will not change the company's control: Chen Tianshi will remain the controlling shareholder and actual controller. Once the proceeds are in place, the company's total assets and net assets will increase accordingly, improving its financial condition, optimizing its asset-liability structure, and enhancing its overall strength and capacity for sustainable development.
The company said that implementing the projects funded by this issuance will significantly enhance its overall competitiveness in chip and software technology for large models, build out a matrix of technical capabilities for large-model applications, and allow it to quickly assemble optimal chip-and-software combinations tailored to different customers' needs.
These projects will strengthen the company's technical competitiveness in the large model market, further reinforce its core strengths, and consolidate its market position, it added.
On September 29, DeepSeek announced that its official app, web version, and mini-program had all been updated to DeepSeek-V3.2-Exp. DeepSeek said that, thanks to a significant reduction in the cost of serving the new model, official API prices were also lowered; under the new pricing, developers' costs for calling the DeepSeek API will fall by more than 50%.
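For context, the DeepSeek API follows an OpenAI-compatible chat-completions interface, so the lower prices apply to ordinary calls such as the sketch below; the API key is a placeholder, and it is assumed here that the default `deepseek-chat` model name routes to the updated model.

```python
# Minimal sketch of calling the DeepSeek API through the OpenAI-compatible
# Python client; the API key is a placeholder, and "deepseek-chat" is assumed
# to route to the newly deployed model.
from openai import OpenAI

client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY",
                base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-chat",
    messages=[{"role": "user", "content": "Summarize what changed in V3.2-Exp."}],
)
print(resp.choices[0].message.content)
```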
On the same day, Cambricon announced same-day support for DeepSeek-V3.2-Exp, DeepSeek's latest model, and open-sourced its vLLM-MLU large model inference engine, providing a code repository and testing steps so that developers can immediately try DeepSeek-V3.2-Exp on Cambricon's hardware and software platform.
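Upstream vLLM exposes an offline inference API that a fork such as vLLM-MLU would presumably keep; the sketch below assumes that interface carries over, and the model identifier and parallelism setting are illustrative rather than taken from Cambricon's repository.

```python
# Minimal sketch, assuming the vLLM-MLU fork keeps upstream vLLM's offline
# inference API (LLM / SamplingParams); the model path and parallelism degree
# below are illustrative, not taken from Cambricon's repository.
from vllm import LLM, SamplingParams

prompts = ["Explain sparse attention in one paragraph."]
sampling = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

# Load the model; on Cambricon MLU hardware the fork is expected to select
# its MLU backend, analogous to how upstream vLLM selects GPUs.
llm = LLM(model="deepseek-ai/DeepSeek-V3.2-Exp", tensor_parallel_size=8)

for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```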
Cambricon said it has long emphasized the software ecosystem for large models and supports all mainstream open-source large models, including DeepSeek. Drawing on this sustained ecosystem work and its technical accumulation, the company was able to complete day-zero adaptation and optimization for DeepSeek-V3.2-Exp's new experimental architecture.
For the new architecture, Cambricon said it achieved rapid adaptation through Triton operator development, pushed performance further with BangC fused operators, and reached industry-leading computational efficiency through a computation-communication parallelism strategy.
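Triton here refers to the open-source Python DSL for writing custom compute kernels. As a generic illustration of what operator development in Triton looks like (the standard vector-add example, not Cambricon's actual kernels), consider:

```python
# Standard Triton vector-add kernel, shown only to illustrate what
# "operator development in Triton" means; it targets whatever backend
# Triton is built for on the local machine.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements              # guard the tail of the tensor
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = out.numel()
    grid = (triton.cdiv(n, 1024),)           # one program instance per 1024 elements
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out

x = torch.randn(1_000_000, device="cuda")
y = torch.randn(1_000_000, device="cuda")
print(torch.allclose(add(x, y), x + y))
```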
In addition, on September 30, Zhipu AI officially released and open-sourced its next-generation large model, GLM-4.6. The latest version in the GLM series, GLM-4.6 brings across-the-board improvements in real-world coding, long-context processing, reasoning, information retrieval, writing, and agent applications. It is another major technical release ahead of the National Day holiday, following DeepSeek-V3.2-Exp and Claude Sonnet 4.5.
Zhipu AI said GLM-4.6 has been deployed on Cambricon's leading domestic chips using FP8+Int4 mixed quantization, the first FP8+Int4 model deployment scheme to land on domestic chips. The approach significantly reduces inference costs while preserving accuracy, opening a feasible path for running large models on domestic chips.
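To make the idea concrete, the sketch below fake-quantizes a weight matrix to Int4 and round-trips activations through FP8 in plain PyTorch (a recent version with the torch.float8_e4m3fn dtype is assumed). It is only a conceptual illustration of mixing FP8 activations with Int4 weights; the grouping, scaling, and kernel details of the actual GLM-4.6 deployment on Cambricon hardware are not described in the announcement and are not reproduced here.

```python
# Conceptual sketch of mixed low-precision inference: simulated Int4 weights
# with FP8-rounded activations. This only shows the idea, not the actual
# GLM-4.6 deployment scheme on Cambricon chips.
import torch

def quantize_weight_int4(w: torch.Tensor):
    """Symmetric per-output-channel Int4 quantization (codes in [-8, 7])."""
    scale = w.abs().amax(dim=1, keepdim=True).clamp_min(1e-8) / 7.0
    q = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)  # int8 holds the 4-bit codes
    return q, scale

def linear_fp8_int4(x: torch.Tensor, q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Matmul with FP8-rounded activations and dequantized Int4 weights."""
    x_fp8 = x.to(torch.float8_e4m3fn).to(torch.float32)  # simulate FP8 activation precision
    w_deq = q.to(torch.float32) * scale                   # dequantize the Int4 weights
    return x_fp8 @ w_deq.t()

w = torch.randn(4096, 4096)            # (out_features, in_features)
q, s = quantize_weight_int4(w)
y = linear_fp8_int4(torch.randn(8, 4096), q, s)
print(y.shape)                          # torch.Size([8, 4096])
```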