Huawei’s AI research division has publicly denied claims that its Pangu large language model (LLM) copied or shares substantial similarities with Alibaba’s Qwen model, amid growing scrutiny in China’s rapidly evolving artificial intelligence sector.

The Allegations
The controversy began after developers observed technical similarities between outputs of Huawei’s Pangu 3.0 model and Alibaba Cloud’s Qwen-7B, sparking speculation within the AI developer community. Some developers and analysts pointed to overlapping code behavior, raising concerns that Huawei may have reused or fine-tuned Qwen checkpoints to develop its own model.
These claims, though largely based on anecdotal comparisons, gained traction on Chinese social media and tech forums, prompting both companies to respond.
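The comparisons behind such claims are typically statistical: analysts align corresponding weight tensors from two released checkpoints and measure how similar they are, since a fine-tuned copy stays numerically close to its parent while independently trained weights do not. A minimal sketch of one such check, using NumPy with randomly generated stand-in tensors (the layer shapes and checkpoint roles here are illustrative, not taken from either actual model):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened weight tensors."""
    a, b = a.ravel(), b.ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Stand-in for one layer of a hypothetical base model.
layer_model_a = rng.standard_normal((512, 512))

# Independently trained weights: similarity is expected to be near 0.
layer_model_b = rng.standard_normal((512, 512))

# A lightly fine-tuned copy of the same layer: similarity stays near 1.
layer_finetuned = layer_model_a + 0.01 * rng.standard_normal((512, 512))

print(cosine_similarity(layer_model_a, layer_model_b))    # near 0
print(cosine_similarity(layer_model_a, layer_finetuned))  # near 1
```

High similarity on one layer is not proof of copying, though: shared initialization schemes and overlapping training data can produce convergent structure, which is partly why analyses of this kind remain contested.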
Huawei’s Response: A Strong Denial
In a formal statement, Huawei’s Noah’s Ark Lab, the research team behind the Pangu models, firmly denied the allegations:
“Pangu 3.0 was developed entirely in-house with independent architecture, training data, and optimization strategies. No third-party checkpoints, including those of Qwen, were used at any stage of its development.”
Huawei emphasized that Pangu was built using proprietary datasets and computational resources, and follows strict internal protocols to ensure originality and security. The lab also noted that coincidental similarity in output or structure can occur in LLMs due to the shared nature of open benchmarks and training tasks.
Alibaba’s Position
Alibaba has not formally accused Huawei of misconduct but acknowledged that discussions around model similarity are common in an open-source landscape. A spokesperson for Alibaba Cloud stated:
“Qwen is an open-source model, and we encourage transparency and innovation in the AI community. We believe in fair use but expect proper attribution and responsible development.”
Open-Source and IP in the AI Era
This incident underscores the complexities of intellectual property (IP) and originality in the age of large language models, where many models are trained on overlapping public datasets and often share architectural similarities (e.g., transformer-based frameworks).
As open-source AI models proliferate, it becomes increasingly difficult to distinguish between inspiration, fine-tuning, and outright duplication—especially when internal checkpoints and training logs are not made public.
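One heuristic that circulates in these disputes is to compare coarse summary statistics of two models, for example the per-layer standard deviations of their weights: a derivative model tends to preserve its parent’s per-layer scales almost exactly, while an independently trained model does not. A sketch of that idea with synthetic stand-in models (the layer counts, shapes, and scales are assumptions for illustration only):

```python
import numpy as np

def layer_stds(layers: list) -> np.ndarray:
    """Per-layer standard deviations as a crude weight 'fingerprint'."""
    return np.array([layer.std() for layer in layers])

rng = np.random.default_rng(1)

# Hypothetical 24-layer model: each layer drawn with its own random scale.
scales = rng.uniform(0.01, 0.05, size=24)
model_a = [rng.standard_normal((256, 256)) * s for s in scales]

# A derivative model keeps nearly the same per-layer scales...
model_derived = [w + 0.001 * rng.standard_normal((256, 256)) for w in model_a]

# ...while an independently trained model has unrelated scales.
other_scales = rng.uniform(0.01, 0.05, size=24)
model_c = [rng.standard_normal((256, 256)) * s for s in other_scales]

fp_a = layer_stds(model_a)
corr_derived = np.corrcoef(fp_a, layer_stds(model_derived))[0, 1]
corr_independent = np.corrcoef(fp_a, layer_stds(model_c))[0, 1]
print(corr_derived)      # close to 1
print(corr_independent)  # much weaker
```

Without access to internal checkpoints and training logs, such fingerprints can only suggest, never establish, a shared lineage, which is exactly the evidentiary gap this incident exposes.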
Industry Impact
- Transparency and reproducibility are becoming critical as AI adoption accelerates.
- Tech companies are being pressured to clarify the provenance of their models, especially in China’s competitive AI market.
- The case highlights a broader challenge: ensuring trust while maintaining proprietary innovation.
What’s Next?
Huawei has reaffirmed its commitment to developing foundational AI technologies independently, and it is likely to increase disclosure around its model-development processes to avoid similar controversies.
Meanwhile, industry observers suggest that clearer industry standards and third-party audits may be needed to resolve such disputes in the future—especially as foundation models become key digital infrastructure.
The rumors surrounding Huawei’s Pangu model highlight the growing tension between open-source collaboration and proprietary AI development. While Huawei has strongly denied any overlap with Alibaba’s Qwen, the incident serves as a reminder that transparency, documentation, and ethical practices are becoming essential for companies building AI at scale.