
Resisting CANSEC with Quaker Roots
April 7, 2026
Quakers are credited with being the original practitioners of socially responsible investing (SRI). Their primary focus, and that of later SRI pioneers such as Peter Kinder and Amy Domini in the 1970s-90s, was companies involved in war. SRI investors of that era, and ever since, have applied an exclusionary screen: they will not own companies that make weapons used in war.
The metric usually applied to decide whether a company is “involved” in a given industry (the military, for example) is a percentage of revenue. If a company derives more than, say, 10% of its revenue from the sale of weapons, it is excluded; below 10%, it is deemed to be in a different industry. The logic is that if weapons are a very small share of revenue, they are not central to the business, and so the company should not be excluded.
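The traditional screen described above reduces to a one-line threshold rule. Here is a minimal illustrative sketch in Python; the company names and figures are hypothetical, not drawn from any real screening database.

```python
# Illustrative sketch of the classic exclusionary screen: exclude a
# company if weapons-related revenue exceeds a percentage threshold.

def excluded_by_revenue_screen(weapons_revenue, total_revenue, threshold=0.10):
    """Return True if the weapons share of revenue exceeds the threshold."""
    return weapons_revenue / total_revenue > threshold

# Hypothetical companies: (weapons revenue, total revenue), in $ billions
companies = {
    "DefenseCo": (12.0, 40.0),   # 30% of revenue from weapons
    "MegaTech": (10.0, 600.0),   # under 2% of revenue, but $10B in dollars
}

for name, (weapons, total) in companies.items():
    verdict = "excluded" if excluded_by_revenue_screen(weapons, total) else "passes"
    print(f"{name}: {weapons / total:.1%} weapons revenue -> {verdict}")
```

Note that under this rule the hypothetical “MegaTech” passes the screen despite $10 billion of weapons revenue, which is exactly the scale problem discussed below.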
It may be time to reconsider this approach, and I think that as more portfolios have core positions in large tech companies, we need to think about a number of issues particular to the tech industry and the military.
Specifically, we should consider three issues. First, scale distortion: these companies are so big that military contracts, while well below 10% of revenue, are still huge, and the impact of their products can be lethal and deeply problematic for SRI investors and for human beings. Second, contractual complicity and opacity: a product or service may be sold for one purpose, or at least presented that way, but used for another. Third, corporate ethos, including the CEO’s ethos: Google founders Larry Page and Sergey Brin famously built a “do good” corporate philosophy around the motto “Don’t be evil.”
As SRI investors we’d certainly like to avoid evil companies. A few recent examples of why SRI investors may want to reflect on how we address “military involvement”:
Project Maven
This was originally a U.S. Department of Defense (DOD) drone surveillance program handled by Google. In 2018, after over 4,000 employees signed a petition demanding that Google drop the contract, Google did. I mention this because it shows that individual action can sometimes affect corporate behaviour! The contract was picked up by Amazon Web Services and later, in 2023, by Palantir. Today Maven uses machine learning algorithms to analyze and fuse vast amounts of surveillance data, and the military uses it to identify and destroy targets. It has 25,000 users across the US military. The contract has grown from $480 million in 2023 to $1.4 billion, and it is being used in the war with Iran. While Palantir manages the contract, it runs on Amazon Web Services, and the program uses Anthropic’s AI tools (a version of Claude).
Portfolios I manage don’t own Palantir (and likely never will), and Anthropic is private (likely to go public this year or next), but consider Amazon. This isn’t their only DOD contract; their total exposure to the US military is less than 1% of their revenue. But it’s still $10 billion! (My point on scale.)
Neither Anthropic’s nor Amazon’s products were made for military applications, but they can be and are being used in war. (My point on contractual complicity/opacity.)
Project Nimbus
This is a $1.2 billion contract between the government of Israel and Google and Amazon Web Services. Its stated intent is to provide cloud computing services including AI.
Importantly, the contract stipulates that Israel is “entitled to migrate to the cloud or generate in the cloud any data they wish.” It became public that Israel was using the technology in their war in Gaza, and again a number of Google employees signed a petition and disrupted corporate meetings. Google responded by firing 50 employees. What does this say about the company that would not “be evil”? As SRI investors, should we incorporate this information into our analysis? The contract represents less than 0.4% of Google’s revenues, but their product is being used in active combat.
The recent conflict between Anthropic and the Trump administration
Anthropic was formed as a “public benefit corporation” in 2021 by Dario Amodei, his sister Daniela, and a small group of programmers who left OpenAI because they felt that it had wavered from its initial goal of incorporating AI safety into development decisions.
Their focus at Anthropic has been to develop AI tools for enterprises rather than individuals, and so it’s not surprising that they found themselves in the US military’s orbit. In July of 2025, Anthropic, OpenAI, xAI and Google all signed $200 million Department of Defense contracts that allowed the military to use their most advanced models for a variety of operations.
In February, Anthropic said that they would not allow the military to use their product to conduct mass civilian surveillance or build autonomous weapons. Amodei decided to walk away from the contract; he said that those two points created a “red line” that Anthropic would not cross, and later said that “disagreeing with the Government is the most American thing in the world.”
The military (Secretary of Defense Pete Hegseth) claimed that Anthropic was in breach of contract. OpenAI stepped in two hours later and picked up the contract. Sam Altman, OpenAI’s CEO, said that the Government/military committed to following the law, but of course there are no laws yet around AI. Then the US government declared Anthropic a “supply chain risk,” a designation that had never before been applied to an American company (in the past it has been used against companies from foreign “enemy” states). This precludes Anthropic from any government contracts and also precludes any other companies that have contracts with the US government from using Anthropic products.
The supply chain designation is a remarkable development that could destroy Anthropic, a company valued at about $500 billion and by some accounts as big as OpenAI. It is also the first time a tech CEO has stood up to the Trump administration. And it has driven customers from OpenAI (ChatGPT) to Anthropic (Claude): annualized revenue at Anthropic has shot from $13 billion to $19 billion since Amodei’s decision, most of it from former OpenAI customers, where ChatGPT “uninstalls” jumped 295% in the week after OpenAI picked up the contract.
In the court of public opinion, Amodei and his company are being seen as “good,” Sam Altman and his company (owned 20% by Microsoft) as “evil.” Many customers and some employees are voting with their pocketbooks and feet. (There is also some pushback and criticism of Amodei and his “red line.”)
This conflict is interesting to SRI investors for many reasons. Is it a classic example of a company doing “the right thing” and being financially rewarded for it (a key, original tenet of SRI)? Do we incorporate CEO behaviour and moral “red lines” into our SRI analysis? Now that the military does a lot more than just fight wars, and not just with guns and ammunition, do we need to broaden our definition of “military”?
I don’t really have any answers, but perhaps SRI investors need to consider “military involvement” with more nuanced, comprehensive criteria than just percentage of revenues.
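As one purely illustrative way to think about what “more nuanced criteria” might look like (this is a sketch, not an established SRI methodology), a screen could combine the traditional revenue percentage with an absolute-dollar cap and a qualitative flag for products used in active combat. All thresholds and figures below are hypothetical.

```python
# Sketch of a multi-criteria screen: percentage of revenue alone is not
# the whole story, so also flag scale (absolute dollars) and use in combat.

def flags_for_review(total_revenue_b, military_revenue_b,
                     used_in_active_combat,
                     pct_threshold=0.10, dollar_cap_b=5.0):
    """Return the reasons a holding might warrant deeper SRI review."""
    reasons = []
    if military_revenue_b / total_revenue_b > pct_threshold:
        reasons.append("revenue share above threshold")
    if military_revenue_b > dollar_cap_b:
        # Scale distortion: small percentage, huge dollar amount
        reasons.append("absolute military revenue above cap")
    if used_in_active_combat:
        # Complicity/opacity: product reportedly deployed in war
        reasons.append("product reportedly used in active combat")
    return reasons

# Hypothetical large tech company: ~1.6% of revenue from the military,
# which passes the classic 10% screen, yet still raises two flags.
print(flags_for_review(620.0, 10.0, used_in_active_combat=True))
```

The point of the sketch is that a holding can pass the classic percentage screen while still raising flags on scale and on how its products are actually used.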
This guest post was written by Alan Harman, a Director and Portfolio Manager at ScotiaMcLeod. See the first post in this series: Are tech stocks the new tobacco stocks?




