AI Security

AI Agents: Resisting Prompt Injection with Social Engineering
Learn how OpenAI designs AI agents to resist advanced prompt injection attacks by applying defense strategies borrowed from social engineering, ensuring robust AI security and data privacy.

OpenAI Acquires Promptfoo to Boost AI Security & Testing
OpenAI strengthens its AI security capabilities by acquiring Promptfoo, integrating its advanced testing and evaluation tools into OpenAI Frontier to secure enterprise AI deployments.

AI-Powered Security: GitHub's Open-Source Vulnerability Scanning Framework
Explore GitHub Security Lab's open-source, AI-powered Taskflow Agent, a framework for enhanced vulnerability scanning. Learn how to deploy this tool to efficiently uncover high-impact security vulnerabilities in your projects.

OpenAI Privacy Portal: User Data Control Simplified
OpenAI's new Privacy Portal empowers users with robust data control, allowing management of personal data, account settings, model training preferences, and removal of information from ChatGPT responses.

OpenAI Department of War Agreement: Ensuring AI Safety Guardrails
OpenAI details its landmark agreement with the Department of War, establishing robust AI safety guardrails against domestic surveillance and autonomous weapons, setting a new standard for defense technology.

Anthropic Defies War Sec on AI, Cites Rights and Safety
Anthropic defies the Department of War's supply chain risk designation, standing firm on ethical AI use by banning mass domestic surveillance and unreliable autonomous weapons.

AI Security: Disrupting Malicious AI Uses
OpenAI details strategies for disrupting malicious AI uses, providing insights from recent threat reports. Learn how threat actors combine AI with traditional tools for sophisticated attacks.

Anthropic Exposes Distillation Attacks by DeepSeek, Moonshot, and MiniMax
Anthropic reveals DeepSeek, Moonshot, and MiniMax ran 16M illicit exchanges to distill Claude's capabilities. How the attacks worked and why they matter.