AI brings intelligence but not wisdom: Why Anthropic needs a human rights policy
This article was co-authored with Iain Levine and originally published as a LinkedIn article on March 2, 2026.
Any deployment of AI by the defense sector must comply with international law. Companies developing AI technologies must ensure they have standards, procedures, and guardrails governing the development, deployment, and use of their technology, including due diligence prior to sales. The only way to do this consistently and effectively is to have a human rights policy that sets out the company’s commitments to respect international human rights law (IHRL) and the laws of war or international humanitarian law (IHL), and describes how these commitments will be implemented through a system of due diligence, transparency, and access to remedy.
The standoff between Dario Amodei, the CEO of Anthropic, and Pete Hegseth, the US Secretary of Defense, has created headlines around the world. It was gripping theater: two of the most powerful and influential men in the world were at an impasse over the application of values and ethics to the world's most talked-about technology.
It is absolutely right that this deadlock has attracted such global attention. The stakes are huge and go far beyond the positions and actions of even the most powerful government in the world and one of the biggest players in the AI industry. What’s at stake are the values and standards that define the relationship between defense and security actors and the companies that create and sell AI technologies, which are increasingly fundamental to effective military action.
Amodei’s insistence that “democratic values” had to prevail, and that Anthropic would not accept its product being used for mass surveillance of Americans or for fully autonomous lethal weapons, was a much-needed display of assertiveness.
We applaud Dario Amodei for his stance. By citing “democratic values” and pushing back against the Pentagon’s unfettered demands, he has made clear that placing values, safety, and responsibility at the heart of AI design and deployment is essential. However, while “democratic values,” “safety,” and “responsibility” are important priorities and principles in this discourse, they are subjective, malleable, and too easily contested. We urge Dario Amodei to build on his repudiation of the Pentagon and, going forward, explicitly root the company's work in respect for human rights.
The same should be true of any AI company contemplating a relationship with the defense sector. A clear commitment to global human rights standards is essential because human rights provide universal and internationally recognized norms and language that set out the company's responsibilities to its stakeholders. Such a commitment ensures consistency over time, across geographies, and amid political and economic pressure when engaging with the military and security sectors to protect and respect civilians.
As Isaac Asimov noted some 40 years ago, "The saddest aspect of society right now is that science gathers knowledge faster than society gathers wisdom." AI brings intelligence, but it does not bring us wisdom. We believe that only human rights will provide the clarity and wisdom required to ensure that AI technologies are used as safely as possible and with due regard for the human rights risks faced by civilians caught up in the conflict zones where these technologies are deployed.
The use of AI by the military is not inherently a human rights violation, but IHRL and IHL regulate the conduct of armed conflict and establish essential limits on its use. For example, AI should not be deployed in ways that harm those not participating in war, such as humanitarian workers, prisoners, and the wounded, nor should it be used in ways that fail to distinguish between military and civilian targets. Companies that provide AI systems to military customers should undertake human rights due diligence and establish strategies—such as policies, guardrails, stakeholder engagement, and technical limitations—to address the risk that their products may be used in violation of IHRL and IHL.
A foundational step in this due diligence is establishing a human rights policy that sets out the expectation that the company’s products are used in ways that respect human rights. With a human rights policy as a foundation, companies selling AI to the military would be wise to state, in clear and unambiguous terms, that their military customers must meet their existing duties under international law, including IHRL and IHL. Indeed, the US and its allies have already committed to doing just this, stating in the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy that the use of AI in armed conflict must be in accordance with States’ obligations under international law, especially international humanitarian law.
However, it is not enough to shift responsibility to the military; companies must also act when it is reasonably foreseeable that customers will use their products to violate IHL and IHRL. Recent actions clearly indicate that the current US administration poses this risk, as do many other governments and militaries worldwide.
This approach is not novel but is based on the well-established UN Guiding Principles on Business and Human Rights (UNGPs). The UNGPs define companies' human rights responsibilities, including the obligation to respect IHL standards in situations of armed conflict.
A core expectation for companies set out in the UNGPs is that they have a human rights policy or statement that sets out clearly their responsibility to respect human rights and what this means for the company's business model and operations. Dario Amodei has warned that AI poses a civilizational challenge, yet his company doesn’t have a human rights policy or a statement of commitment to human rights, as the UNGPs insist it should.
There are several crucial questions regarding the use of AI in the defense sector, including AI-supported decision support systems, lethal autonomous weapons (or “killer robots”), and drone swarms. As AI companies become closer partners with the defense sector, they will play a defining role in determining how these crucial questions are resolved. A commitment to human rights should serve as the foundation.
A central concept at the heart of these challenges is ensuring effective “human in the loop” guardrails. Humans must remain in control of weapons systems and decisions as to whether a human life can be taken. As the UN Secretary-General has argued, “there is no place for lethal autonomous weapon systems in our world. Machines that have the power and discretion to take human lives without human control should be prohibited by international law.” Without human oversight, there will be neither respect for human dignity nor accountability for harm to civilians.
A human rights policy is not just good for users and other rightsholders, but also good for business because it helps address legal compliance and reputational concerns. While there is much emphasis in the AI world on the critical need for innovation, which has tended to drive companies to oppose regulation, innovation ultimately depends on trust. And trust in AI will only be achieved when companies demonstrate that they are serious about addressing its potential harms through a clear and unambiguous commitment to human rights. We believe that Amodei is not only taking a principled position for its own sake but also demonstrating that he understands this will benefit the company in the long term.
As the United Nations human rights chief, Volker Türk, noted in his remarks at the global AI Impact Summit in India in February 2026, “AI that benefits humanity will only happen if we wire human rights into AI products by design.” The issues raised in this standoff are not confined to Anthropic and the Pentagon. As AI technologies increasingly become indispensable to military and security forces globally—for intelligence, surveillance, logistics planning, supporting the management of military engagements, and lethal autonomous weapons systems—AI companies will face a similar dilemma again and again. We believe that all companies developing AI should maintain a human rights policy setting out the company’s commitment to respecting IHL and IHRL, its expectation that customers do the same, and the steps it will take to address these risks. This is a core strategic step that will strengthen the hand of those seeking responsible approaches to AI development and deployment.