Code Velocity

AI Security

OpenAI AI agents resisting prompt injection and social engineering attacks
AI Security

AI Agents: Resisting Prompt Injection and Social Engineering

Learn how OpenAI designs AI agents to resist advanced prompt injection attacks by leveraging social engineering defense strategies, ensuring robust AI security and data privacy.

·5 min read
OpenAI and Promptfoo logos symbolizing their acquisition to enhance AI security and testing
AI Security

OpenAI Acquires Promptfoo to Boost AI Security & Testing

OpenAI strengthens its AI security capabilities by acquiring Promptfoo, integrating its advanced testing and evaluation tools into OpenAI Frontier to secure enterprise AI deployments.

·5 min read
Diagram illustrating GitHub Security Lab's AI-powered vulnerability scanning Taskflow Agent workflow
AI Security

AI-Powered Security: GitHub's Open-Source Vulnerability Scanning Framework

Explore GitHub Security Lab's open-source, AI-powered Taskflow Agent, a revolutionary framework for enhanced vulnerability scanning. Learn to deploy this tool to uncover high-impact security vulnerabilities in your projects efficiently.

·7 min read
OpenAI Privacy Portal dashboard showing options for user data control and AI privacy management.
AI Security

OpenAI Privacy Portal: User Data Control Simplified

OpenAI's new Privacy Portal empowers users with robust data control, allowing management of personal data, account settings, model training preferences, and removal of information from ChatGPT responses.

·5 min read
OpenAI and Department of War agreement with AI safety guardrails
AI Security

OpenAI Department of War Agreement: Ensuring AI Safety Guardrails

OpenAI details its landmark agreement with the Department of War, establishing robust AI safety guardrails against domestic surveillance and autonomous weapons, setting a new standard for defense technology.

·7 min read
Anthropic's official statement regarding the Department of War's potential supply chain risk designation over AI ethics.
AI Security

Anthropic Defies Department of War on AI, Cites Rights and Safety

Anthropic defies the Department of War's supply chain risk designation, standing firm on ethical AI use by banning mass domestic surveillance and unreliable autonomous weapons.

·4 min read
Cybersecurity shield over AI circuits, representing OpenAI's efforts in disrupting malicious AI uses
AI Security

AI Security: Disrupting Malicious AI Uses

OpenAI details strategies for disrupting malicious AI uses, providing insights from recent threat reports. Learn how threat actors combine AI with traditional tools for sophisticated attacks.

·4 min read
Diagram showing distillation attack flow from frontier AI model to illicit copies through fraudulent account networks
AI Security

Anthropic Exposes Distillation Attacks by DeepSeek and MiniMax

Anthropic reveals DeepSeek, Moonshot, and MiniMax ran 16M illicit exchanges to distill Claude's capabilities. How the attacks worked and why they matter.

·4 min read