Anthropic Claude: Consumer Terms & Privacy Policy Updates

[Image: Anthropic Claude Consumer Terms and Privacy Policy update notification screen]

Anthropic Rolls Out Key Updates to Claude's Consumer Terms and Privacy Policy

Anthropic, a leading AI research company, has announced significant updates to its Consumer Terms and Privacy Policy for users of its Claude AI models. These changes, effective August 28, 2025, are designed to empower users with greater control over their data while simultaneously enabling Anthropic to develop more capable and safer AI systems. The core of these updates centers on an opt-in mechanism for data usage in model training and an extended data retention period for those who participate.

The move reflects a growing industry trend towards greater transparency and user agency in the development of artificial intelligence. By allowing users to actively choose whether their interactions contribute to Claude's learning, Anthropic aims to foster a collaborative environment that benefits both individual users and the broader AI ecosystem. This strategic evolution of consumer-facing policies underscores the company's commitment to responsible AI development and user trust.

Enhancing Claude with User-Driven Insights and Safeguards

The primary change in Anthropic's updated policy is the introduction of a user choice regarding data utilization for model improvement. Users on Claude's Free, Pro, and Max plans, including those using Claude Code through accounts on those plans, will now have the option to allow their data to contribute to the training of future Claude models. Anthropic positions this participation as a crucial step towards building more robust and intelligent AI.

Opting into this data usage offers several direct benefits. According to Anthropic, user interactions provide valuable real-world insights that help refine model safety protocols, making the systems for detecting harmful content more accurate and less prone to flagging innocuous conversations. Beyond safety, user data is expected to significantly improve Claude's core capabilities, such as coding proficiency, analytical reasoning, and complex problem-solving skills. This feedback loop is essential for the continuous evolution of large language models, leading to more refined and useful AI tools for all.

It is important to note the specific scope of these updates. While applicable to consumer-tier accounts, these policy changes explicitly do not extend to services governed by Anthropic’s Commercial Terms: Claude for Work, Claude for Government, Claude for Education, and all API usage, whether direct or through third-party platforms such as Amazon Bedrock (including services like Amazon Bedrock AgentCore) and Google Cloud’s Vertex AI. This distinction ensures that commercial clients and enterprise-level partners remain under their existing, often bespoke, data agreements and privacy frameworks.

Anthropic is committed to providing users with clear control over their data. Both new and existing Claude users will encounter distinct processes for making their data-sharing choices. New users signing up for Claude will find the option to select their preference for model training as an integral part of the onboarding process, allowing them to define their privacy settings from the outset.

For existing users, Anthropic has initiated a phased rollout of in-app notifications. These pop-up windows will prompt users to review the updated Consumer Terms and Privacy Policy and decide whether to allow their data to be used for model improvement. Users have a grace period until October 8, 2025, to make their selection. If an existing user accepts the new policies and opts in before this deadline, the changes take effect immediately for all new or resumed chats and coding sessions. Making a choice by the specified date matters: continued use of Claude after October 8, 2025 will require a selection on the model training setting.

Crucially, user control is not a one-time decision. Anthropic emphasizes that preferences can be adjusted at any point through the dedicated Privacy Settings section within the Claude interface. This flexibility underscores the company's commitment to ongoing user autonomy regarding their personal data.

Policy Comparison: Data Usage and Retention

To clarify the impact of these updates, the following table summarizes the key differences between opting in and opting out of data usage for model training under the new Consumer Terms:

| Feature | Opt-in for Model Training (New Policy) | Opt-out (Existing/Default Policy) |
| --- | --- | --- |
| Data Usage | New/resumed chats & coding sessions are used for model improvement & safety. | New/resumed chats & coding sessions are not used for model training. |
| Data Retention Period | 5 years for opted-in data. | 30 days for all data. |
| Applies To | Claude Free, Pro, and Max accounts & Claude Code sessions. | Claude Free, Pro, and Max accounts & Claude Code sessions. |
| Exclusions | Commercial Terms, API, Amazon Bedrock, Google Vertex AI services. | Same exclusions. |

Strategic Data Retention for Long-Term AI Development

Alongside the opt-in for model training, Anthropic is also introducing an extended data retention period for users who choose to participate. If a user opts in to having their data used for model improvement, the retention period for new or resumed chats and coding sessions will be extended to five years. For users who do not opt in, the existing 30-day data retention period will continue to apply. This extended retention also covers feedback submitted about Claude's responses to prompts.
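
Expressed as a minimal sketch (illustrative only; Anthropic exposes no such API, and the constants simply restate the policy described above), the choice maps to two retention windows:

```python
from datetime import timedelta

# Illustrative constants restating the policy described in this article.
RETENTION_OPTED_IN = timedelta(days=5 * 365)  # five years for opted-in data
RETENTION_DEFAULT = timedelta(days=30)        # existing 30-day policy

def retention_period(opted_in: bool) -> timedelta:
    """Retention window applied to new or resumed chats and coding sessions."""
    return RETENTION_OPTED_IN if opted_in else RETENTION_DEFAULT

print(retention_period(True))   # 1825 days, 0:00:00
print(retention_period(False))  # 30 days, 0:00:00
```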

The rationale behind the five-year retention period is deeply rooted in the realities of advanced AI development. Large language models undergo development cycles that often span 18 to 24 months before release. Maintaining data consistency over longer periods is vital for creating more stable and predictable models. Consistent data allows models to be trained and refined in similar ways, leading to smoother transitions and upgrades for users.

Furthermore, the extended retention significantly aids in improving Anthropic’s internal classifiers, the systems it uses to identify and counter misuse, abuse, spam, and other harmful patterns. These safety mechanisms become more effective by learning from data collected over extended durations, enhancing Claude's ability to remain a safe and beneficial tool for everyone.
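
Anthropic has not published how these classifiers work, but the kind of supervised text classifier they resemble can be sketched generically; the toy data and scikit-learn pipeline below are assumptions for illustration, and the point is simply that more labeled history gives such a model more abuse patterns to learn from:

```python
# Toy sketch of a misuse classifier; not Anthropic's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled conversations: 1 = likely misuse/spam, 0 = benign.
texts = [
    "click this prize link to claim free money now",
    "send bulk unsolicited promo links to this list",
    "help me debug this python function",
    "summarize this research paper in three bullets",
]
labels = [1, 1, 0, 0]

# A longer data horizon means more labeled examples like these, which is
# the effectiveness gain the article attributes to extended retention.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

print(classifier.predict(["claim your free prize money at this link"]))  # expected: [1]
```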

To protect user privacy, Anthropic employs a combination of advanced tools and automated processes to filter or obfuscate sensitive data before it is used for any model training or analysis. The company emphatically states that it does not sell user data to third parties, reinforcing its commitment to privacy even with the extended retention for opted-in data. Users maintain control over their data even after opting in; deleting a specific conversation ensures it will not be used for future model training.
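
Anthropic does not detail its filtering pipeline, but one common approach it may resemble is automated redaction of obvious identifiers before text enters any training or analysis step. A minimal sketch of that idea follows; the patterns and placeholder labels are assumptions for illustration, not Anthropic's actual rules:

```python
import re

# Deliberately coarse patterns for a few common identifier types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Reach me at jane.doe@example.com or +1 (555) 010-7788."))
# -> "Reach me at [EMAIL] or [PHONE]."
```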

User Empowerment and Data Governance

Anthropic's updated Consumer Terms and Privacy Policy underscore a user-centric approach to AI development. By putting the choice to contribute to model improvement directly in the hands of the user, the company aims to foster a more transparent and collaborative relationship. The ability to modify these preferences at any time ensures that users retain continuous control over their data's journey.

Should a user initially opt in to model training but later change their mind and opt out, Anthropic has a clear policy. While data used in previously completed training runs and in already released models may still exist within those versions, any new chats and coding sessions after the opt-out will not be used for future training. The company commits to discontinuing the use of previously stored chats and coding sessions in any future model training iterations once the preference is updated. This gives users a robust mechanism to manage their privacy settings dynamically.

These updates represent a measured step towards balancing the immense potential of data-driven AI innovation with the paramount importance of user privacy and transparency. As AI models become increasingly integrated into daily life, policies like these are crucial for building trust and ensuring ethical development.

Frequently Asked Questions

What are the core changes introduced in Anthropic's updated Consumer Terms and Privacy Policy for Claude users?
Anthropic is rolling out significant updates to its Consumer Terms and Privacy Policy, primarily centered on giving users the choice to allow their data to be used for improving Claude AI models. When this setting is enabled, data from Free, Pro, and Max accounts, including Claude Code sessions, will contribute to training new models. This aims to enhance model safety, accuracy, and capabilities in areas like coding, analysis, and reasoning. Additionally, for users who opt in, the data retention period will be extended to five years. Conversely, if users choose not to opt in, the existing 30-day data retention policy will remain in effect. These changes do not impact commercial users under specific enterprise agreements or API usage, ensuring distinct policies for different user segments.
Which Claude services and user types are affected by these new policy updates, and which are not?
The updated Consumer Terms and Privacy Policy specifically apply to individual users on Anthropic's Claude Free, Pro, and Max plans. This includes instances where users interact with Claude Code through accounts associated with these consumer plans. However, it's crucial to understand that these updates do not extend to services operating under Anthropic's Commercial Terms. This means that Claude for Work (including Team and Enterprise plans), Claude for Government, Claude for Education, and all API uses—whether direct or via third-party platforms such as Amazon Bedrock or Google Cloud’s Vertex AI—are explicitly excluded from these new consumer-focused policy changes. Commercial users and API integrators will continue to operate under their existing agreements, which have separate privacy and data handling provisions.
Why is Anthropic making these changes, particularly regarding data usage for model training and extended retention?
Anthropic states that these changes are driven by the fundamental need to continuously improve the capability, safety, and utility of large language models like Claude. Real-world interactions provide invaluable data signals that help models learn which responses are most helpful, accurate, and safe. For example, developers debugging code with Claude offer crucial insights for improving future coding capabilities. The extended five-year data retention, specifically for users who opt in, is designed to support AI's long development cycles: it keeps data consistent for more stable model upgrades and makes the internal classifiers that detect and mitigate harmful usage patterns, such as abuse or spam, more effective over longer periods, ultimately making Claude safer for everyone.
What steps do existing and new Claude users need to take concerning these policy updates and their data preferences?
Existing Claude users will receive an in-app notification prompting them to review the updates and make a decision regarding whether to allow their chats and coding sessions to be used for model improvement. Users have until October 8, 2025, to make this selection. If they choose to accept earlier, the new policies apply immediately to new or resumed chats. After the deadline, making a selection is mandatory to continue using Claude. New users will be presented with this choice as part of the initial signup process. Regardless of whether one is a new or existing user, the preference can be modified at any time through the dedicated Privacy Settings within the Claude interface, offering continuous control over personal data usage and ensuring transparency.
What happens if a user initially allows data for model training but later decides to change their mind and opt-out?
Anthropic emphasizes that users retain full control over their data preferences and can update their selection at any time through their Privacy Settings. If a user initially opts in to model training and later turns this setting off, any *new* chats and coding sessions with Claude will no longer be used for future model training runs. It's important to note, however, that previously completed training runs and models that have already been released may still contain data from past interactions. Nevertheless, Anthropic commits to stopping the use of those previously stored chats and coding sessions in any *future* model training iterations once the opt-out preference is registered, thereby respecting the user's updated choice.
How does Anthropic ensure user privacy and data protection with these new policies, especially with extended retention?
To safeguard user privacy, Anthropic employs a combination of advanced tools and automated processes designed to filter or obfuscate sensitive data before it is used for model training, even with the extended retention period for opted-in users. The company explicitly states that it does not sell user data to third parties, reinforcing its commitment to privacy. Furthermore, users maintain control: deleting a conversation ensures it won't be used for future model training, and changing the opt-in setting prevents new chats from being used. These measures aim to balance the need for data-driven model improvement with stringent privacy protections, ensuring user trust and data security throughout the AI development lifecycle. This comprehensive approach underscores Anthropic's dedication to responsible AI practices.
