AI Safety Engineering: Testing and Red-Teaming Language Models for Product Teams (Paperback)
What happens when your AI product encounters its first adversarial attack: are you prepared, or just hopeful? Your language model is live, serving thousands of users. While you monitor performance metrics, bad actors are probing for prompt injection...
$29.99