About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role
As a Safeguards Enforcement Lead focusing on frontier model abuse, you will serve as the dedicated program owner for one of the most consequential and fast-moving abuse vectors on our platform: actors who misuse Anthropic's models to train competing AI systems in violation of our usage policies and terms of service. Safety is core to our mission, and you'll own how we identify, investigate, and act against this category of harm — from developing the detection playbook to seeing enforcement cases through to resolution.
Important Context: In this position, you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. There is also an on-call responsibility across the Policy and Enforcement teams.
Responsibilities:
Own the end-to-end enforcement strategy against unauthorized frontier model abuse — from detection signal development through enforcement action and post-enforcement measurement
Operationalize detection and review pipelines — translating leads from case investigations and detection outputs into structured review workflows that scale across surfaces
Drive enforcement actions for high-priority actors — synthesizing signals from across intelligence, detection, and review into enforcement packages suitable for formal escalation
Partner with Legal and Policy to assess case strength, characterize ToS and IP violations, and support enforcement escalations through formal channels
Close the loop between enforcement outcomes and upstream improvements — channeling review results and enforcement findings back to policy updates and detection refinements
Develop and maintain a dynamic enforcement framework that accounts for the complexity of cross-surface enforcement — including varied escalation paths, partner coordination, and enforcement consistency across surfaces
Collaborate with Threat Intelligence, Research, Engineering, and Policy partners to ensure detection coverage keeps pace with evolving frontier abuse tactics
Maintain rigorous documentation of enforcement decisions, pipeline logic, and precedents to build institutional knowledge
Qualifications:
Required
5+ years of experience in trust & safety, abuse enforcement, fraud investigation, policy, or a related field — with demonstrated ownership of complex, high-stakes enforcement programs
Track record of building detection and enforcement approaches for novel or emerging abuse vectors where established playbooks don't exist
Experience supporting or directly contributing to formal enforcement actions — including case documentation, evidence packaging, and escalation coordination
Strong data analysis skills — comfortable navigating complex, multi-table datasets to surface behavioral patterns and support investigations
Experience conducting structured investigations, including open-source intelligence techniques and cross-referencing external data sources to attribute activity
Demonstrated ability to translate ambiguous policy questions into defensible enforcement decisions and clear written findings
Strong written and verbal communication skills — able to present complex enforcement cases clearly to stakeholders across Legal, Policy, and Engineering
Preferred
Familiarity with the AI/ML ecosystem — including how model distillation, fine-tuning, and synthetic data generation work in practice, and how actors attempt to obscure this activity
Experience conducting threat actor profiling or open-source investigations in a trust & safety, intelligence, or legal context
Experience working with generative AI products, including using AI tools to accelerate investigative and analytical workflows
Background or interest in AI policy, IP enforcement, competitive intelligence, or AI governance
Experience coordinating with external enforcement partners or platform partners on escalated enforcement actions
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.

Annual Salary: $230,000–$270,000 USD

Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit anthropic.com/careers directly for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.