Anthropic Rolls Out Key Updates to Claude's Consumer Terms and Privacy Policy
Anthropic, a leading AI research company, has announced significant updates to its Consumer Terms and Privacy Policy for users of its Claude AI models. These changes, effective August 28, 2025, are designed to empower users with greater control over their data while simultaneously enabling Anthropic to develop more capable and safer AI systems. The core of these updates centers on an opt-in mechanism for data usage in model training and an extended data retention period for those who participate.
The move reflects a growing industry trend towards greater transparency and user agency in the development of artificial intelligence. By allowing users to actively choose whether their interactions contribute to Claude's learning, Anthropic aims to foster a collaborative environment that benefits both individual users and the broader AI ecosystem. This strategic evolution of consumer-facing policies underscores the company's commitment to responsible AI development and user trust.
Enhancing Claude with User-Driven Insights and Safeguards
The primary change in Anthropic's updated policy is the introduction of a user choice regarding data utilization for model improvement. Users on Claude's Free, Pro, and Max plans, including those leveraging Claude Code from associated accounts, will now have the option to allow their data to contribute to the training of future Claude models. This participation is positioned as a crucial step towards building more robust and intelligent AI.
Opting into this data usage offers several direct benefits. According to Anthropic, user interactions provide valuable real-world insights that help refine model safety protocols, making the systems for detecting harmful content more accurate and less prone to flagging innocuous conversations. Beyond safety, user data is expected to significantly improve Claude's core capabilities, such as coding proficiency, analytical reasoning, and complex problem-solving skills. This feedback loop is essential for the continuous evolution of large language models, leading to more refined and useful AI tools for all.
It is important to note the specific scope of these updates. While applicable to consumer-tier accounts, these policy changes explicitly do not extend to services governed by Anthropic’s Commercial Terms. This includes Claude for Work, Claude for Government, Claude for Education, and all API usage, whether directly or through third-party platforms like Amazon Bedrock or Google Cloud’s Vertex AI. This distinction ensures that commercial clients and enterprise-level partners maintain their existing, often bespoke, data agreements and privacy frameworks. For users leveraging services like Amazon Bedrock AgentCore, separate agreements remain in place.
Navigating Your Choices: Opt-in and Deadlines
Anthropic is committed to providing users with clear control over their data. Both new and existing Claude users will encounter distinct processes for making their data-sharing choices. New users signing up for Claude will find the option to select their preference for model training as an integral part of the onboarding process, allowing them to define their privacy settings from the outset.
For existing users, Anthropic has initiated a phased rollout of in-app notifications. These pop-ups prompt users to review the updated Consumer Terms and Privacy Policy and decide whether to allow their data to be used for model improvement. Users have until October 8, 2025, to make their selection. If an existing user accepts the new policies and opts in before this deadline, the changes take effect immediately for all new or resumed chats and coding sessions. After October 8, 2025, continued use of Claude will require making a selection on the model-training setting.
Crucially, user control is not a one-time decision. Anthropic emphasizes that preferences can be adjusted at any point through the dedicated Privacy Settings section within the Claude interface. This flexibility underscores the company's commitment to ongoing user autonomy regarding their personal data.
Policy Comparison: Data Usage and Retention
To clarify the impact of these updates, the following table summarizes the key differences between opting in and opting out of data usage for model training under the new Consumer Terms:
| Feature | Opt-in for Model Training (New Policy) | Opt-out (Existing/Default Policy) |
|---|---|---|
| Data Usage | New/resumed chats & coding sessions used for model improvement & safety. | New/resumed chats & coding sessions not used for model training. |
| Data Retention Period | 5 years for opted-in data. | 30 days for all data. |
| Applies To | Claude Free, Pro, Max accounts & Claude Code sessions. | Claude Free, Pro, Max accounts & Claude Code sessions. |
| Exclusions | Commercial Terms, API, Amazon Bedrock, Google Vertex AI services. | Same exclusions. |
Strategic Data Retention for Long-Term AI Development
Alongside the opt-in for model training, Anthropic is also introducing an extended data retention period for users who choose to participate. If a user allows their data to be used for model improvement, the retention period for new or resumed chats and coding sessions will be extended to five years. For users who do not opt in, the existing 30-day data retention period will continue to apply. This extended retention also covers feedback submitted about Claude's responses to prompts.
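The retention rule described above can be expressed as a short sketch. This is illustrative only: Anthropic has not published an API for these settings, and the function name is hypothetical.

```python
from datetime import timedelta

def retention_period(opted_in: bool) -> timedelta:
    """Retention period for new or resumed chats and coding sessions
    under the updated Consumer Terms (illustrative model only)."""
    if opted_in:
        # Five years for users who allow their data to be used for training.
        return timedelta(days=5 * 365)
    # 30-day default for everyone else.
    return timedelta(days=30)

print(retention_period(True).days)   # 1825
print(retention_period(False).days)  # 30
```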
The rationale behind the five-year retention period is deeply rooted in the realities of advanced AI development. Large language models undergo development cycles that often span 18 to 24 months before release. Maintaining data consistency over longer periods is vital for creating more stable and predictable models. Consistent data allows models to be trained and refined in similar ways, leading to smoother transitions and upgrades for users.
Furthermore, the extended retention significantly aids in improving Anthropic's internal classifiers, the systems used to identify and counter misuse, abuse, spam, and other harmful patterns. These safety mechanisms become more effective by learning from data collected over longer durations, enhancing Claude's ability to remain a safe and beneficial tool for everyone.
To protect user privacy, Anthropic employs a combination of advanced tools and automated processes to filter or obfuscate sensitive data before it is used for any model training or analysis. The company emphatically states that it does not sell user data to third parties, reinforcing its commitment to privacy even with the extended retention for opted-in data. Users maintain control over their data even after opting in; deleting a specific conversation ensures it will not be used for future model training.
User Empowerment and Data Governance
Anthropic's updated Consumer Terms and Privacy Policy underscore a user-centric approach to AI development. By putting the choice to contribute to model improvement directly in the hands of the user, the company aims to foster a more transparent and collaborative relationship. The ability to modify these preferences at any time ensures that users retain continuous control over their data's journey.
Should a user initially opt in to model training but later change their mind, Anthropic has a clear policy. Data already used in completed training runs and released models may persist within those versions, but any new chats and coding sessions after the opt-out will not be used for future training. The company also commits to discontinuing the use of previously stored chats and coding sessions in any future model training runs once the preference is updated, giving users a robust mechanism to manage their privacy settings dynamically.
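The training-eligibility rules described above can be sketched as follows. The names and structure here are hypothetical, not a published Anthropic API; the sketch only models the stated rules for which conversations feed future training runs.

```python
from dataclasses import dataclass

@dataclass
class Chat:
    """Minimal stand-in for a conversation record (hypothetical)."""
    deleted: bool = False

def usable_for_future_training(chat: Chat, currently_opted_in: bool) -> bool:
    """A chat feeds future training runs only if the user is currently
    opted in and the chat has not been deleted. Opting out drops even
    previously stored chats from future runs, though already-completed
    training runs and released models are unaffected."""
    return currently_opted_in and not chat.deleted

print(usable_for_future_training(Chat(), True))              # True
print(usable_for_future_training(Chat(deleted=True), True))  # False
print(usable_for_future_training(Chat(), False))             # False
```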
These updates represent a measured step towards balancing the immense potential of data-driven AI innovation with the paramount importance of user privacy and transparency. As AI models become increasingly integrated into daily life, policies like these are crucial for building trust and ensuring ethical development.
Original source
https://www.anthropic.com/news/updates-to-our-consumer-terms
