AI and Military Use: The New Battleground Between Silicon Valley and the Pentagon


Commercial artificial intelligence has been formally integrated into US military operations, sparking an unprecedented crisis between the Pentagon and the companies that build it. On July 14, 2025, the Department of Defense’s Chief Digital and Artificial Intelligence Office (CDAO) awarded contracts capped at $200 million each to the four leading AI companies — Anthropic, OpenAI, Google, and xAI — totaling an $800 million program to develop frontier AI capabilities for national security. The first real test came on January 3, 2026, with Operation Absolute Resolve in Venezuela, after which Wall Street Journal reports revealed that Anthropic’s Claude was used during the active operation via Palantir. As of mid-February 2026, the Pentagon is threatening to designate Anthropic a “supply chain risk” for refusing unrestricted military use of its AI, while OpenAI, Google, and xAI have already accepted the Department of Defense’s terms. This report documents the situation in detail, rigorously distinguishing between verified facts, journalistic reports, and speculation.


1. Anthropic and the Pentagon: a $200M contract at risk

The CDAO contract

Verified fact (primary source: Anthropic press release, July 14, 2025; official CDAO announcement). Anthropic signed an Other Transaction Agreement (OTA) prototype contract capped at $200 million over two years, awarded by the CDAO. The contract covers frontier AI prototype development for national security, including Claude Gov (customized versions for national security clients) and Claude for Enterprise, all running on Amazon Web Services infrastructure. CDAO Director Doug Matty stated the contracts “will enable the Department to leverage the technology and talent of America’s frontier AI companies to develop agentic AI workflows across diverse mission areas.”

For context: Claude Opus 4.6, the model whose launch triggered the stock market “SaaSpocalypse,” is the same technology now operating on classified Pentagon networks.

Anthropic’s usage policy

Verified fact (primary source: anthropic.com/legal/aup, version effective September 15, 2025). Anthropic’s Usage Policy explicitly prohibits: producing, modifying, or designing weapons and explosives; designing weaponization processes; and synthesizing biological, chemical, radiological, or nuclear weapons. It also bans “battlefield management applications” and surveillance without consent. However, the policy contains a crucial governmental exception clause allowing Anthropic to “enter into contracts with certain government clients that tailor use restrictions to that client’s public mission and legal authorities, if, in Anthropic’s judgment, the contractual restrictions and applicable safeguards are adequate to mitigate the potential harms.”

Anthropic’s two red lines

Source: Axios, February 15-16, 2026, with Anthropic spokesperson confirmation. Anthropic maintains two limits it refuses to negotiate: a ban on mass surveillance of US citizens and a ban on fully autonomous weapons without human intervention. The Pentagon considers these categories too much of a “gray area” to be operationally useful and demands that all AI companies permit use of their tools for “all lawful purposes,” including weapons development, intelligence gathering, and battlefield operations.

The “supply chain risk” threat

Source: Axios, February 16, 2026; Pentagon spokesperson Sean Parnell (directly attributed statement). Defense Secretary Pete Hegseth is reportedly “close” to severing commercial ties with Anthropic and designating it a “supply chain risk” — a penalty normally reserved for foreign adversaries. This designation would force any company wanting to contract with the Department of Defense to certify it doesn’t use Anthropic’s models. Parnell stated: “The Department of War’s relationship with Anthropic is being reviewed. Our nation requires our partners to be willing to help our warfighters win in any fight.” A senior Pentagon official added: “It’s going to be enormously complicated to untangle, and we’re going to make sure they pay a price for forcing us to do it.”

Note on reliability: The most incendiary threats come from anonymous Pentagon sources via Axios, a highly reputable outlet. These statements may represent negotiating positions rather than finalized decisions. Anthropic responded that it maintains “productive, good-faith conversations” with the Department of Defense.

The Anthropic-Palantir alliance

Verified fact (primary sources: Palantir/Anthropic BusinessWire releases):

The alliance developed in two phases. Phase one was announced November 7, 2024: Anthropic and Palantir, along with AWS, provide access to Claude 3 and 3.5 models to US intelligence and defense agencies. Claude is operationalized within the Palantir AI Platform (AIP), hosted in Palantir’s Impact Level 6 (IL6)-accredited environment on AWS — one of the DoD’s most stringent security standards, corresponding to the SECRET level. Palantir described itself as “the first commercial industry partner to bring Claude models to classified environments.”

Phase two was announced April 17, 2025: Anthropic joined Palantir’s FedStart program, making Claude available to civilian federal agencies at FedRAMP High and DoD IL5 security standards, hosted on Google Cloud. The stated goal was to reach “millions” of federal workers.

Critical implication: Through this alliance, Claude became the first and, to date, only frontier AI model available on the DoD’s classified systems. This makes Anthropic indispensable in the short term but also places it at the epicenter of the controversy.


2. Operation Absolute Resolve: what we know and what’s speculation

Confirmed facts

Primary sources: White House official statements, Department of Defense (war.gov), General Dan Caine briefing. On January 3, 2026, US special forces executed Operation Absolute Resolve, capturing Venezuelan President Nicolás Maduro and his wife Cilia Flores at the Fort Tiuna compound in Caracas. The operation involved over 150 aircraft from 20 bases, including F-22s, F-35s, B-1 bombers, electronic warfare aircraft, and RQ-170 stealth drones. Ground forces were in the compound for approximately 30 minutes. Maduro and Flores were transferred to the USS Iwo Jima and then to New York, where they were arraigned on January 5 before Judge Alvin Hellerstein on narcoterrorism charges. Both pleaded not guilty.

Casualties according to different sources

Figures vary significantly. The Pentagon confirmed 7 US service members wounded and zero killed; 5 returned to duty and 2 remained in recovery. Regarding Venezuelan and Cuban casualties, the best consolidated estimate ranges between 75 and 83 dead, comprising 47 Venezuelan military (Defense Minister Padrino López’s final figure, January 16), 32 Cuban military/intelligence (confirmed by Cuba), and at least 2 civilians independently documented. Diosdado Cabello claimed over 100 total dead, a figure not independently verified. Airwars, the British independent monitor, identified at least two incidents with civilian casualties, including an airstrike in Catia La Mar that hit a three-story residential building.

What the WSJ reports about Claude’s use

Source: Wall Street Journal, circa February 13-15, 2026; confirmed by Axios. According to anonymous sources “familiar with the matter,” Claude was used during the active operation, not just in preparation, deployed through Anthropic’s alliance with Palantir. The WSJ also reported that an Anthropic employee contacted a counterpart at Palantir to ask how Claude had been used during the operation. Axios independently confirmed Claude’s use during the active operation but noted it “could not confirm the precise role Claude played.”

What is NOT known: Claude’s exact role in the operation has not been detailed by any primary source. Some secondary outlets speculated about “AI-assisted targeting” and “autonomous drone guidance,” but these specific applications are not confirmed by the original WSJ report. It has not been established whether Claude was used for targeting, intelligence analysis, document processing, or other functions.

Official responses

Anthropic stated: “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise. Any use of Claude is required to comply with our Usage Policies.” The company specifically denied discussing Claude’s use in specific operations with the Department of Defense or partners including Palantir. A source cited by Fox News indicated that Anthropic “has visibility into classified and unclassified use and is confident that all use has been in line with their usage policy.” Palantir declined to comment. The Pentagon did not officially confirm Claude’s use, but a senior official described Anthropic’s inquiry to Palantir as concerning.


3. OpenAI: from banning military use to deploying on Pentagon networks

The policy change

Verified fact (primary source: OpenAI usage policies page, updated January 10, 2024; originally reported by The Intercept). OpenAI’s original policy explicitly prohibited activities with “high risk of physical harm, including: weapons development and military and warfare.” On January 10, 2024, without public announcement, OpenAI removed the categorical prohibition on “military and warfare,” describing the revision as a rewrite to make the document “clearer and more readable.” Spokesperson Niko Felix explained that “a principle like ‘Don’t harm others’ is broad but easy to understand.” Anna Makanju, VP of Global Affairs, acknowledged at Davos that “the blanket prohibition on military use made many people think many use cases were prohibited that people think are well-aligned with what we want to see in the world.”

The current policy (updated January 29, 2025) maintains the prohibition on “developing or using weapons,” “harming others or destroying property,” and unauthorized surveillance, but no longer contains any categorical ban on military use.

DoD contracts

Verified fact (primary source: CDAO announcement, June 2025; official OpenAI blog). OpenAI Public Sector LLC received a contract worth up to $200 million from the CDAO, with an initial obligation of less than $2 million. OpenAI described the scope as administrative operations, military healthcare access, and proactive cyber defense, though the DoD announcement mentioned “warfighting and enterprise domains.” Additionally, in December 2024, OpenAI announced a strategic alliance with Anduril focused on counter-drone systems (CUAS), where OpenAI’s models would be trained on Anduril’s threat data.

ChatGPT on classified vs. unclassified networks

Verified fact (primary sources: OpenAI blog, DoD release, Microsoft Azure Government blog). On unclassified networks, OpenAI deployed a customized version of ChatGPT on GenAI.mil on February 10, 2026, accessible to the DoD’s 3 million civilian and military personnel. On classified networks, OpenAI’s models are available through Microsoft Azure, not directly by OpenAI. Microsoft deployed GPT-4 in an air-gapped Top Secret cloud in May 2024. In April 2025, Azure OpenAI Service was authorized at all classification levels of the US government (IL2-IL6 plus Top Secret ICD 503). The distinction is crucial: OpenAI directly operates only on unclassified networks; it’s Microsoft that provides classified access.


4. Google erased its own red lines on weapons

From Project Maven to dropping AI principles

Google’s trajectory is the most dramatic policy reversal. In 2017-2018, Google participated in Project Maven, a DoD contract to analyze drone imagery with machine learning. More than 3,100 employees signed an open letter demanding Google “not be in the business of war.” Dozens resigned in protest. In June 2018, Google announced it wouldn’t renew the contract and published its AI Principles, explicitly prohibiting “weapons or other technologies whose principal purpose is to cause or directly facilitate injury to people” and “technologies that gather or use information for surveillance violating internationally accepted norms.”

On February 4, 2025, Google quietly removed these prohibitions. The new principles, co-authored by Demis Hassabis (DeepMind CEO) and James Manyika, replaced specific prohibitions with generic commitments to “appropriate human oversight” and ensuring benefits “substantially outweigh foreseeable risks.” The stated justification was “a global competition for AI leadership within an increasingly complex geopolitical landscape.” Human Rights Watch called Google’s shift from refusing to build AI for weapons to supporting national security ventures “stark.” Margaret Mitchell, Google’s former co-lead of ethical AI, warned: “Having removed that means Google will now probably work on deploying technology directly that can kill people.”

Current military contract status

Google currently maintains multiple significant military contracts: the JWCC contract worth $9 billion (shared with AWS, Microsoft, and Oracle) for combat cloud; its own CDAO $200M contract; and Gemini for Government, which was the first AI model deployed on GenAI.mil in December 2025. Google Cloud also achieved IL6 (SECRET level) accreditation in June 2025. Additionally, Google maintains the controversial Project Nimbus with Israel, a $1.2 billion contract (with Amazon) that includes Israel’s Ministry of Defense as a client, over which Google fired more than 50 protesting employees in April 2024.


5. xAI: no published ethical principles and competing for drone swarms

Verified fact (primary sources: CDAO release, official xAI blog, DoD announcement). xAI received its $200 million CDAO contract on July 14, 2025 and was described by NBC News as a “late addition” that “came out of nowhere” without having been under consideration before March 2025. A former Pentagon procurement official, Greg Parham, stated xAI is “far, far, far, far behind” other companies in the government authorization process. On December 22, 2025, the DoD announced the integration of “xAI for Government” into GenAI.mil, with Grok operating at IL5.

Bloomberg report (February 16, 2026, anonymous sources): SpaceX and xAI (now a SpaceX subsidiary following the merger announced in early February 2026) are competing in a secret $100 million Pentagon challenge to develop voice-controlled autonomous drone swarm technology, organized by the Defense Innovation Unit and SOCOM’s Defense Autonomous Warfare Group. This contrasts directly with Elon Musk’s 2015 position, when he co-signed a Future of Life Institute letter calling for a ban on “offensive autonomous weapons beyond meaningful human control.”

xAI has not published formal ethical principles or a usage policy regarding military applications. Unlike Anthropic, OpenAI, and Google, there is no public document from xAI defining restrictions on military use of its models. Its most significant official statement on military use comes from its blog: “Supporting the critical missions of the United States government is a key part of our mission.”

Senator Elizabeth Warren sent a formal letter to Secretary Hegseth in September 2025 questioning the xAI contract, citing the potential for improper benefit from Musk’s access to government data through DOGE, competition concerns, Grok’s misinformation issues, and a July 2025 incident in which Grok generated antisemitic content, calling itself “MechaHitler.”


6. Palantir: the infrastructure connecting Silicon Valley to the battlefield

A web of defense contracts

Palantir, founded in 2003 with seed funding from In-Q-Tel (the CIA’s venture capital arm), has become the indispensable intermediary between commercial AI and military operations. Its major contracts include: the Army enterprise agreement worth up to $10 billion over 10 years (July 2025, source: army.mil); the CDAO’s Maven Smart System at $1.3 billion through 2029 with over 20,000 active users across 35+ military tools; and ShipOS for the Navy at up to $448 million.

Verified financial data (primary source: SEC filing, Q4 2025 earnings release via BusinessWire, February 1, 2026): Palantir reported total revenue of $4.48 billion in fiscal year 2025, with US government revenue of $570 million in Q4 alone (+66% year-over-year). Net income was $608.7 million in Q4, with $7.2 billion in cash and zero debt. Guidance for 2026 projects 61% year-over-year growth.

How Palantir works as a bridge

Palantir acts as an intermediary through several mechanisms. First, it holds elite security accreditations (IL5, IL6, FedRAMP High, Top Secret cloud) that most AI companies lack. Through FedStart and direct alliances, AI models are deployed in classified environments using Palantir’s pre-accredited infrastructure. Second, its ontology layer — a semantic framework mapping how data sources relate to each other — sits between raw government data and AI models, controlling what information models can access. Third, the AIP (Artificial Intelligence Platform), launched in April 2023, integrates multiple AI models (Claude, GPT-4, Llama) in a model-agnostic architecture with AI Guardrails that granularly control what models can see and do, generating a secure digital audit trail of all operations.
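
To make the gateway pattern concrete, below is a minimal, purely illustrative Python sketch of a model-agnostic gateway with guardrails and an audit trail. Every name in it (ModelGateway, GuardrailPolicy, the claude/gpt backend stubs, the scope and task labels) is hypothetical and invented for illustration; none of it corresponds to Palantir’s actual AIP interfaces, which are not public.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, Dict, List, Set


# Hypothetical guardrail policy: which data scopes a backend model may see
# and which task categories it may perform. Invented for illustration only.
@dataclass
class GuardrailPolicy:
    allowed_scopes: Set[str]
    allowed_tasks: Set[str]


@dataclass
class AuditRecord:
    timestamp: str
    model: str
    task: str
    scopes: List[str]
    decision: str  # "allowed" or "blocked"


class ModelGateway:
    """Illustrative model-agnostic gateway: guardrail check in front, audit log behind."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}
        self._policies: Dict[str, GuardrailPolicy] = {}
        self.audit_log: List[AuditRecord] = []

    def register(self, name: str, backend: Callable[[str], str], policy: GuardrailPolicy) -> None:
        self._backends[name] = backend
        self._policies[name] = policy

    def query(self, model: str, task: str, scopes: List[str], prompt: str) -> str:
        policy = self._policies[model]
        allowed = task in policy.allowed_tasks and set(scopes) <= policy.allowed_scopes
        # Every request is recorded, whether it is served or blocked.
        self.audit_log.append(AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            model=model, task=task, scopes=scopes,
            decision="allowed" if allowed else "blocked",
        ))
        if not allowed:
            raise PermissionError(f"Guardrail blocked task {task!r} on {model!r} for scopes {scopes}")
        return self._backends[model](prompt)


# Usage sketch: two backends registered with different guardrails.
gateway = ModelGateway()
gateway.register(
    "claude", lambda p: f"[claude] {p}",
    GuardrailPolicy(allowed_scopes={"logistics", "maintenance"}, allowed_tasks={"summarize", "translate"}),
)
gateway.register(
    "gpt", lambda p: f"[gpt] {p}",
    GuardrailPolicy(allowed_scopes={"logistics"}, allowed_tasks={"summarize"}),
)

print(gateway.query("claude", "summarize", ["logistics"], "Summarize the maintenance report."))
# gateway.query("gpt", "translate", ["maintenance"], "...")  # would raise PermissionError, still audited
```

The sketch only aims to show the shape of the bridge role described above: a policy check in front of every model call and an append-only record behind it. In the architecture the article describes, an ontology layer would sit roughly where the string scopes are here, mapping raw data sources to controlled semantic objects.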

Palantir also maintains alliances with Microsoft (since August 2024, as “the first commercial industry partner to deploy Azure OpenAI Service in classified environments”), with Meta (Llama for defense, since November 2024), and with Anduril (consortium since December 2024 to prepare defense data for AI training at SCI and SAP levels).


7. The resignations revealing internal tensions

Mrinank Sharma: the letter that didn’t name the Pentagon

Verified fact (primary source: post on X, February 9, 2026, 14.6 million views). Sharma, leader of Anthropic’s Safeguards Research team since August 2023, published his resignation letter, which, while widely cited in the context of the military dispute, is notably vague about specific internal disagreements. Its key lines: “The world is in danger. And not just from AI, or bioweapons, but from a whole set of interconnected crises.” He added: “Throughout my time here, I have seen repeatedly how hard it is to let our values govern our actions. I have seen this within myself, within the organization, where we constantly face pressures to set aside what matters most.” He did not directly mention military use or accuse Anthropic of specific conduct. Anthropic stated it was “grateful for Sharma’s work advancing AI safety research.”

Other Anthropic departures require nuance. Harsh Mehta and Behnam Neyshabur (both early February 2026) announced their departures praising the company and stating they were going to “start something new.” Dylan Scandinaro moved to OpenAI as Head of Preparedness without publicly criticizing Anthropic. Linking these departures to the military dispute would be speculative; available evidence suggests standard career moves, not protest resignations.

Ryan Beiermeister: a disputed firing at OpenAI

Source: Wall Street Journal, February 10, 2026. Beiermeister, VP of Product Policy at OpenAI, was fired in January 2026, officially over a sexual discrimination allegation brought by a male colleague, which she categorically denies. Prior to her firing, she had voiced criticism of the planned “Adult Mode” for ChatGPT. OpenAI stated her departure “was not related to any matter she raised.” The connection between her opposition to Adult Mode and her firing is suggestive but not conclusive.

Zoë Hitzig: the most articulate resignation

Verified fact (primary source: post on X and guest essay in the New York Times, February 11, 2026). Hitzig resigned on February 10, 2026 — the same day OpenAI began testing ads in ChatGPT — posting: “OpenAI has the most detailed record of private human thought ever assembled. Can we trust them to resist the forces pushing them to abuse it?” In her NYT essay, she drew explicit parallels with Facebook’s evolution and proposed alternatives to advertising as a revenue model. Her resignation wasn’t directly related to military use but to the monetization of conversational data. This connects to a problem we’ve analyzed before: AI as the new data leak channel is a concern that extends far beyond the military sphere.

A structural pattern documented at OpenAI

Beyond individual departures, OpenAI has dissolved two safety teams in two years: the Superalignment team in May 2024 (following the resignations of Ilya Sutskever and Jan Leike) and the Mission Alignment team in February 2026. This is the strongest structural indicator of safety deprioritization. Leike stated in his resignation that “safety culture and processes have taken a back seat to shiny products.”


8. The legal framework: regulation and liability

Existing US regulation

Primary source: DoD Directive 3000.09 (original November 2012, updated January 2023). The directive on autonomy in weapons systems doesn’t explicitly prohibit lethal autonomous weapons systems (LAWS) but requires all systems to allow commanders and operators to “exercise appropriate levels of human judgment over the use of force” and mandates senior-level reviews before development or deployment. The FY2026 NDAA (signed December 2025) added specific requirements: prohibits AI acquisition from adversary nations (China, Russia, Iran, North Korea, explicitly banning DeepSeek); mandates a comprehensive cybersecurity and governance policy for all AI/ML systems within 180 days; and requires a cross-functional team for AI model evaluation by June 2026.

The DeepSeek ban in the US military context is another angle of the technological sovereignty dilemma we explored in our analysis of DeepSeek and data sovereignty — geopolitical decisions already determine which AI you can use.

The Trump administration revoked Biden’s AI executive order (EO 14110) on day one, replacing it with EO 14179 focused on “removing barriers to American AI leadership.” The status of Biden’s restrictions on AI use in national security (such as the ban on automating nuclear weapons under NSM-25) is unclear under the current administration.

No established legal framework assigns specific liability when commercial AI is used in military operations with casualties. International Humanitarian Law imposes obligations on persons, not weapons systems. Per the DoD Law of War Manual, commanders and operators are legally responsible. However, multiple academics identify a “tripartite accountability gap”: developers claim they designed systems to specifications; operators claim lack of real-time control; commanders invoke reasonable reliance on certified systems. The US government generally enjoys sovereign immunity under the Federal Tort Claims Act, with exceptions, and military operations abroad are typically excluded. Commercial AI providers could face liability under product liability theories (design defect, failure to warn), but this hasn’t been tested in court for military AI.

The EU AI Act expressly excludes military use

Verified fact (primary source: EU Regulation 2024/1689, Article 2(3), Recital 24). The AI Act “shall not apply to AI systems where and insofar as placed on the market, put into service, or used with or without modification exclusively for military, defence or national security purposes.” However, if an AI system developed for military purposes is subsequently used for civilian purposes, it does fall within the regulation’s scope. The national security exemption was a late addition during trilogue negotiations and has been criticized by some jurists as contradicting prior EU jurisprudence.


9. The statements defining the new policy

Pete Hegseth redefines “responsible AI”

Primary source: official DoD AI strategy memorandum, January 9, 2026 (media.defense.gov); speech at SpaceX Starbase, January 12, 2026 (corroborated by AP, DefenseScoop, Breaking Defense). Hegseth declared: “Responsible AI in the Department of War means objectively truthful AI capabilities, deployed safely and within the laws governing the department’s activities. We will not employ AI models that don’t let you fight wars.” The memo establishes seven “Pace-Setting Projects,” including Swarm Forge (AI-enabled combat), Agent Network (AI-enabled battle management), and Ender’s Foundry (AI-enabled simulation). It mandates that the latest AI models be deployed “within 30 days of public release” and orders new contractual language permitting “any lawful use” across all AI contracts within 180 days. It defines “responsible AI” as AI free from “‘ideological’ tuning.”

The “Agent Network” concept for battle management connects directly to the evolution of AI agents we analyzed here — the difference being these agents don’t manage support tickets, but military operations.

Dario Amodei: defense yes, autocracy no

Primary source: essay “The Adolescence of Technology” on darioamodei.com, January 26, 2026. Amodei warned of swarms of “millions or billions of fully automated armed drones, controlled locally by powerful AI and coordinated strategically by even more powerful AI” that could constitute “an invincible army.” His stated formula: “We should use AI for national defense in every way except those that would make us more like our autocratic adversaries.” He advocated blocking chip exports to China during the “critical 2025-2027 window” and arming democracies with AI “carefully and within limits.”

Sam Altman: “never say never”

Source: statements at the Vanderbilt University Summit on Modern Conflict, April 10, 2025 (reported by Bloomberg, Washington Times). Altman stated on weapons development: “I’ll never say never, because the world could get really weird, and at that point, you just have to look at what’s going on and say ‘let’s make a trade-off between some really bad options.’” He added: “I don’t think most of the world wants AI making decisions about weapons,” but also: “We have to and are proud of and really want to participate in areas of national security.” OpenAI reinforced its institutional pivot by adding retired General Paul Nakasone, former NSA director, to its board.


10. Comparative table: who allows what

| Dimension | Anthropic | OpenAI | Google | xAI |
| --- | --- | --- | --- | --- |
| Original military use ban | Yes (in AUP) | Yes (removed Jan. 2024) | Yes (removed Feb. 2025) | Never published restrictions |
| Current weapons restrictions | Bans autonomous weapons and mass surveillance | Bans “developing or using weapons” (with exceptions) | No explicit ban since Feb. 2025 | No published policy |
| CDAO contract ($200M) | Yes (Jul. 2025) | Yes (Jun. 2025) | Yes (Jul. 2025) | Yes (Jul. 2025) |
| Model on GenAI.mil (unclassified) | Not reported | ChatGPT (Feb. 2026) | Gemini (Dec. 2025, first) | Grok (Dec. 2025) |
| Classified network access | Only available model (via Palantir IL6) | Via Microsoft Azure (all levels) | Google Distributed Cloud (IL6+) | In development (IL5 declared) |
| Accepts “all lawful use” terms | No (maintains 2 red lines) | Reported yes (unclassified) | Reported yes (unclassified) | Reported yes |
| Published ethical principles | Detailed Usage Policy | Usage policy (no military mention) | New generic principles (Feb. 2025) | None |
| Defense contractor alliance | Palantir (Nov. 2024) | Anduril (Dec. 2024) | Multiple (JWCC, Nimbus) | SpaceX (merger Feb. 2026) |
| Documented internal protests | “Internal unease” (anonymous source) | Multiple safety resignations | 3,100+ Maven signatories; 50+ fired over Nimbus | None reported |

Conclusion: the fork that will define the military AI era

The situation in February 2026 marks a turning point. Anthropic is the only company among the four resisting the Pentagon’s terms, but its position is precarious: its model is the only one available on classified networks, making it temporarily indispensable but also the primary target of political pressure. The other three companies have progressively yielded: Google erased its weapons principles, OpenAI deleted its military ban, and xAI never established restrictions.

The most revealing case is Google’s evolution: from 3,100 employees protesting Project Maven in 2018 to being the first model on GenAI.mil in 2025, via the quiet elimination of its weapons principles. This seven-year arc suggests that commercial and geopolitical pressure has proven irresistible for AI companies, regardless of their initial stances. It’s the same dynamic we saw with the $7 trillion AI bubble — when this much money is at stake, principles get negotiated.

Three critical voids remain unresolved. First, no clear legal framework exists for liability when commercial AI is used in operations with casualties; liability theoretically falls on the human chain of command, but the “tripartite gap” identified by academics persists. Second, Claude’s exact role in Operation Absolute Resolve remains officially unconfirmed; neither the Pentagon, Anthropic, nor Palantir have provided specific details. Third, Hegseth’s redefinition of “responsible AI” — which eliminates any restriction beyond existing law — has not been tested against the DoD’s own existing frameworks, including the Ethical AI Principles adopted in 2020, which emphasize fairness, traceability, and governability.

What’s at stake isn’t simply a $200 million contract. It’s whether the companies building the most powerful technology of the era can maintain any ethical boundary that military power cannot override, or whether the logic of geopolitical competition will ultimately subordinate every safeguard to the demand for “any lawful use.”


February 2026

Sources: Anthropic (official releases, AUP), CDAO/DoD (contract announcements, Directive 3000.09, NDAA FY2026), Palantir (BusinessWire, SEC filings), OpenAI (official blog, usage policies), Google (AI Principles, releases), xAI (official blog), Wall Street Journal, Axios, Bloomberg, The Intercept, CNBC, AP, DefenseScoop, Breaking Defense, Human Rights Watch, Airwars, New York Times. Editorial opinions are the author’s.

