US Unveils New AI Guidelines Amid Growing Dispute With Anthropic

Anthropic is at the center of a dispute with the Pentagon as the Trump administration draws up strict rules for civilian artificial-intelligence contracts requiring companies to allow “any lawful” use of their models, the Financial Times reported on Friday.

The Pentagon designated Anthropic a “supply-chain risk” on Thursday, barring government contractors from using the AI firm’s technology in work for the U.S. military. That followed a months-long dispute over the company’s insistence on safeguards that the Defense Department says went too far.

A draft of the guidelines reviewed by the FT says AI groups seeking business with the government must grant the U.S. an irrevocable license to use their systems for all legal purposes.

The guidance from the General Services Administration would apply to civilian contracts and is part of a broader government-wide effort to strengthen AI services procurement, the newspaper reported, adding that it mirrors measures the Pentagon is considering for military contracts.

“It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic,” Josh Gruenbaum, commissioner of the Federal Acquisition Service, a GSA subsidiary that helps procure software for the federal government, told Reuters by email.

“As directed by the President, GSA has terminated Anthropic’s OneGov deal – ending their availability to the Executive, Legislative, and Judicial branches through GSA’s pre-negotiated contracts,” Gruenbaum said.

The White House did not immediately respond to requests for comment from Reuters.

The GSA draft mandates that contractors “must not intentionally encode partisan or ideological judgments into the AI systems data outputs,” the FT reported.

It requires companies to disclose whether their models have been “modified or configured to comply with any non-U.S. federal government or commercial compliance or regulatory framework,” the newspaper said.
