About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the role
We believe skill with AI is fundamental to human agency. Education Labs sits inside Product Research, which means we build learning products with a front-row seat to what's coming. We're close to the people shaping Claude's next capabilities, and our job is to make sure the world can actually use them.
We're looking for a senior product manager to own our most ambitious bet: building learning products and experiences that are native to frontier AI. What does education look like when the system teaching you can reason, adapt, and grow alongside you? We think the answer looks very different from what exists today, and we want someone who's energized by figuring that out with us.
We're skeptical of completion rates and course catalogs as the true measure of success. We build AI fluency that creates agency, not atrophy — skills that grow in value as capabilities accelerate, not hacks that expire with the next model release. We measure success by whether millions of people gain real agency over AI's role in their lives, not just proficiency with our products.
This role sits at the intersection of product, research, and learning. You'll spend real time with research teams, sitting in on capability previews, understanding what's emerging, and translating that into what we should build next. The models will keep changing what's possible; you need to see that coming and build toward it, not react to it after the fact.
You'll own the team's core product work including both the technical and non-technical considerations for how learners interact with our content. You'll also have a hand in the broader success of the team. We're small and growing fast against demand that's grown severalfold in months. Some weeks that means shaping strategy and pushing our thinking forward. Other weeks it means chasing down reviews, aligning roadmaps, or doing the connective work that keeps everything moving. A lot of what determines whether we succeed is coordination: collaborating with stakeholders across research, GTM, comms, and legal (many of them non-technical), keeping dependencies unblocked, and making sure good work actually ships. The operational muscle to make our strategy real is just as important as the ability to develop the vision. If you're looking for a role where the scope is already neatly drawn, this isn't it. If you want to help define what this becomes, it might be.
Responsibilities
Build at the frontier
Co-develop vision and own strategy and execution for AI-native learning products — the adaptive systems, assessments, and in-product experiences that will define how people learn with and about frontier AI
Work closely with research teams to understand emerging capabilities and what they unlock for learners — you're translating research into product, not waiting for a spec
Prototype and experiment to validate ideas quickly, using Claude and our internal tools as building blocks
Anticipate how capability shifts change the product and build with that trajectory in mind
Stay close to learners — run user research, watch real people move from "I don't understand this" to "I could teach this," and let what you observe reshape what we build
Be a thought partner
Push the team's thinking on what education should look like as AI capabilities accelerate — help us spot what we're missing, not just execute what we've already decided
Bring product rigor to a team that's scaling fast: clear priorities, sharp tradeoffs, honest metrics that connect learning to real behavior change
Act as a bridge between research, product, GTM, marketing, comms, and the education team, translating in all directions
See the whole picture
Collaboratively define success metrics grounded in demonstrated understanding, skill progression, and lasting agency — not time-on-site or completion counts
Keep a hand in the success of adjacent team efforts, helping the overall education portfolio work as a system rather than a collection of projects
Partner across Anthropic — product teams, research, GTM, societal impacts, marketing — to keep education woven into how we ship and scale
Run the program, not just the product: drive stakeholder alignment across technical and non-technical partners, keep workstreams on track, and own the operational rhythm that turns strategy into shipped work
You may be a good fit if you have
5+ years in product management, with a track record of shipping products from zero to one and seeing them through to real impact
Technical fluency — you're comfortable with AI tools, can prototype with Claude, and hold your own in conversations with researchers and engineers
Genuine curiosity about frontier AI — you follow what's emerging, you play with new capabilities, and you have opinions about where things are heading
Comfort with ambiguity and wide scope — you make good calls with incomplete information and you're not precious about where your job ends
A track record as a force multiplier on small teams — you make the people around you more effective, not just your own roadmap
Strong written and verbal communication — you can write a crisp spec, present to leadership, and synthesize messy stakeholder input into a clear direction
Conviction that education should build agency, not dependency — you want to teach people to think with AI, and you believe that matters
Strong candidates may also have
Experience working closely with research or applied science teams, or a background that makes you fluent in how research becomes product
Background in learning products, developer education, or curriculum design
Founder experience or time at an early-stage company where scope was wide and you wore many hats
Familiarity with how people actually learn — learning science, instructional design, or strong instincts from having taught something yourself
Experience building with LLMs as core product infrastructure, not just as a feature
What this role is not
This is a hands-on IC product, project, and program management role. You'll shape strategy, push the team's thinking, have real influence over what education at Anthropic becomes, and coordinate complex execution across multiple stakeholders and teams — but it doesn't involve people management. If you're looking to immediately lead a team, this isn't the right fit. If you want broad scope and high autonomy as a builder, it might be.
The annual compensation range for this role is listed below. For sales roles, the range provided is the role's On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Annual Salary: $305,000—$460,000 USD
Logistics
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than working on smaller, more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process