ESG in the Age of Artificial Intelligence: When Companies Draft the Rules of the Game
- Roni Lior
Anthropic, the company behind the AI tool Claude, recently published "Claude's Constitution" - a detailed public document outlining the values, ethical principles, and operational boundaries of its AI system. Unlike a generic code of ethics, this is a working document that explicitly acknowledges that AI is not "objective," but is driven by social and moral assumptions embedded at the core of the product itself.
At first glance, this looks like a moral statement. But is it really?
A Rational Economic Move
Looking through the lens of economist Oliver Williamson's transaction cost theory, Anthropic's decision to publish a constitution is not necessarily a values-driven gesture - it's a rational economic strategy. In environments characterized by uncertainty, information asymmetry, and weak governmental regulation, companies need internal protection mechanisms to reduce risk and stabilize operations. When state-level safeguards are unreliable, building your own becomes a competitive advantage.
The Shifting Balance of Power
In recent decades, large corporations have grown wealthier and more powerful than most nation-states. Governments increasingly depend on tech giants to manage critical infrastructure, and corporations hold a clear advantage when it comes to data and information control. In this reality, the pressure to regulate corporate behavior has shifted away from governments toward other mechanisms - including financial markets, shareholders, and the companies themselves.
Voluntary Regulation as a Governance Tool
Anthropic's constitution is a form of voluntary self-regulation - not imposed by the state, but emerging from within the business sector. By publicly committing to ethical boundaries (for example, Claude will not assist in creating manipulative content, even when asked), Anthropic signals to consumers, regulators, and business partners: "This tool is not a weapon; it can be trusted." Furthermore, by shaping its own rules now, the company aims to preempt the potentially far stricter government or supranational regulation that may come later.
This logic has already played out beyond paper: Anthropic publicly confronted the U.S. Department of Defense, demanding that the Pentagon commit to boundaries on how Claude could be used before agreeing to work with it.
A New Model for ESG?
Anthropic's constitution may represent a broader shift - one where ESG, voluntary regulation, and technological innovation converge. A model in which corporate responsibility is not at odds with business interest, but an inseparable part of it.
Yet important questions remain: In an era where businesses are shaping the very regulations they operate under - what is the new role of regulators? Do they still represent the broader public interest? And what tools do governments need to navigate this challenging new landscape?
The full article is available in Hebrew on our website.

