The appeal is obvious: Chinese open-source AI models are capable and cheap. As many as 80% of American AI start-ups rely on Chinese open-weight models, Silicon Valley venture capitalists estimate. Chinese models now account for 17.1% of global downloads, ahead of the US at 15.8%, according to new research from MIT and Hugging Face, a repository of open-source AI software and datasets. Only two years ago, American models dominated, with more than 60% of downloads.
But the reliance on Chinese software comes with political, legal, and ethical risks. Technical evaluations suggest that political bias in these models can be mitigated, but such evaluations overlook the dangers buried in the licensing agreements. By adopting these open-weight models, firms bind themselves to contracts written under, and shaped by, Chinese regulation, and those contracts raise ethical AI and human rights concerns.
Open-source software licensing has long been straightforward. The classic MIT license, drafted in the 1980s, runs a mere 166 words and allows users to do almost anything with the code, provided they keep the copyright notice and accept a simple warranty disclaimer. As the software ecosystem expanded, some licenses introduced additional clauses while still preserving openness. The Apache 2.0 license, for example, adds clear rules on intellectual property rights and requires users to note when they have modified the software. The underlying philosophy remained constant: enable broad use and encourage innovation.
The rise of Responsible AI Licenses (RAILs) reflects a shift in how developers think about software openness. Unlike traditional software, open-weight AI models can be misused at scale, prompting creators to worry about downstream behavior. RAILs preserve the portability of open-source licenses while adding behavioral restrictions, usually about 13 separate provisions set out in an annex, on how the licensed models may be deployed. These typically amount to a small set of principles that most AI ethicists would regard as common sense, such as bans on unlawful activity, harm, discrimination, defamation, and disinformation.
Take a close look at Chinese open-weight AI license agreements, however, and red flags emerge. DeepSeek’s V3 license, for example, omits the restriction found in standard RAILs on using AI for administering “justice, law enforcement, immigration or asylum processes, such as predicting an individual will commit fraud/crime.” Europe, by contrast, has restricted precisely these uses of AI in law enforcement, as reflected in Article 5 of the EU AI Act. The omission suggests that DeepSeek is aligned with Chinese authoritarian practice: China leads the world in developing AI tools for surveillance and population management, and serious human rights concerns have been documented.
Other additions made by Chinese firms are also revealing. Tencent, for example, has inserted a clause in one of its AI licenses prohibiting any use of the model “that violates or disrespects the social ethics and moral standards of other countries or regions.” This could cover outputs that describe the atrocities of China’s Cultural Revolution, discuss the international status of Taiwan, or address other topics the CCP considers sensitive.
Any dispute over these terms would be litigated in Chinese courts. The wording also closely matches, in English translation, Article 4 of China’s Deep Synthesis Regulation, which prohibits AI outputs that violate “respect for social ethics and morality” (尊重社会公德和伦理道德). The context is telling: Article 4 also requires AI to adhere to the “correct political direction, public opinion guidance, and value orientation, and promote the positive and virtuous development of deep synthesis services.”
In English, phrases like “social ethics” or “moral standards” may appear harmless. In China’s political and legal system, they carry specific ideological content. Marxist-Leninist ethics is treated as a formal discipline, and instruction in it is mandatory for political elites. As former Australian Prime Minister and China expert Kevin Rudd has written, “No matter how abstract and unfamiliar his [Marxist-Leninist] ideas might be, they are having profound effects on the real-world content of Chinese politics and foreign policy.”
US and European AI entrepreneurs leveraging Chinese AI tools risk importing these politics into their products. The risks created by this software layer are less visible than hardware dependencies, where the rip-and-replace of Chinese telecom equipment creates massive financial burdens, but they raise the possibility of legal entanglement and of political manipulation at the margins.
A stronger licensing standard should be developed. Choice of law and venue deserve particular attention: Singapore, London, or even Hong Kong, where many Chinese companies have overseas operations, could each serve as a neutral forum for AI-related disputes.
Chinese case law also needs to be more accessible so that commercial decisions can be reviewed and understood. If courts in China are subject to political “rule by law” rather than adhering to “rule of law,” policymakers should consider limiting the use of these licenses through legislation.
Venture capital investors can help by encouraging portfolio companies to examine license terms and choose models that pose lower commercial and political risk. Chinese developers seeking access to US, European, and other liberal democratic markets will then face pressure to amend their licenses accordingly and adopt standard responsible-AI provisions.
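Much of that due diligence can be automated. The sketch below, a minimal example in Python using the huggingface_hub library, pulls a model’s declared license tag and scans its license text for a few illustrative phrases; the repository ID, the “LICENSE” filename, and the keyword list are assumptions for demonstration only, and no keyword scan substitutes for review by counsel.

```python
# A minimal license-screening sketch (assumptions: the model is hosted on
# Hugging Face and ships a "LICENSE" file; the phrase list is illustrative,
# not a legal test). Requires: pip install huggingface_hub
from huggingface_hub import hf_hub_download, model_info

REPO_ID = "deepseek-ai/DeepSeek-V3"  # hypothetical target of a due-diligence review

# Illustrative phrases that should trigger human review if present.
RED_FLAGS = [
    "people's republic of china",  # governing-law and venue clauses
    "social ethics",               # language tracking the Deep Synthesis Regulation
    "arbitration",                 # where disputes would actually be heard
]

info = model_info(REPO_ID)
license_tag = getattr(info.card_data, "license", None) if info.card_data else None
print(f"Declared license tag: {license_tag}")

try:
    path = hf_hub_download(repo_id=REPO_ID, filename="LICENSE")
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    for phrase in RED_FLAGS:
        if phrase in text:
            print(f"Flag for legal review: license text mentions '{phrase}'")
except Exception:
    # Repos often name the file differently (e.g., LICENSE-MODEL) or omit it entirely.
    print("No LICENSE file at the default path; inspect the repository manually.")
```

Even a screen this crude makes the broader point: license terms are data, and investors can ask portfolio companies to check them as routinely as they scan software dependencies for security vulnerabilities.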
Governments, including the US Center for AI Standards and Innovation and similar bodies abroad, should work together to create international best-practice licensing standards that protect rights and prevent political misuse of AI technologies. They should also monitor whether Chinese AI model developers invoke their existing license agreements against licensees for abusive purposes. Digital economy trade agreements offer a further path to better tools for enforcement, dispute resolution, and shared norms.
Open-weight AI models represent a new category of software. They offer many advantages and speed innovation, but their risks must be recognized and mitigated.
Seth Hays is a Senior Fellow with the Tech Policy Program at the Center for European Policy Analysis (CEPA). Seth is Managing Director and Co-Founder of APAC GATES, an Indo-Pacific-based not-for-profit management consultancy. He brings over two decades of experience in the not-for-profit sector in Asia, including work with governments, leading universities, think tanks, and civil society organizations across the region.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.