Applied AI Engineer, Inference at Weights & Biases

Location: Bellevue, Washington, US
Type: On-site · Full-time
Compensation: $165k–$242k/yr
Skills
Python · LLM Inference · Model Serving · Benchmarking · Latency Optimization · Throughput Optimization · Profiling · Performance Engineering (+16)
CoreWeave, the AI Hyperscaler™, acquired Weights & Biases to create the most powerful end-to-end platform to develop, deploy, and iterate AI faster. Since 2017, CoreWeave has operated a growing footprint of data centers covering every region of the US and across Europe, and was ranked one of the TIME100 most influential companies of 2024. By bringing together CoreWeave's industry-leading cloud infrastructure with the best-in-class tools AI practitioners know and love from Weights & Biases, we're setting a new standard for how AI is built, trained, and scaled.

The integration of our teams and technologies is accelerating our shared mission: to empower developers with the tools and infrastructure they need to push the boundaries of what AI can do. From experiment tracking and model optimization to high-performance training clusters, agent building, and inference at scale, we're combining forces to serve the full AI lifecycle, all in one seamless platform.

Weights & Biases has long been trusted by over 1,500 organizations, including AstraZeneca, Canva, Cohere, OpenAI, Meta, Snowflake, Square, Toyota, and Wayve, to build better models, AI agents, and applications. Now, as part of CoreWeave, that impact is amplified across a broader ecosystem of AI innovators, researchers, and enterprises.

As we unite under one vision, we're looking for bold thinkers and agile builders who are excited to shape the future of AI alongside us. If you're passionate about solving complex problems at the intersection of software, hardware, and AI, there's never been a more exciting time to join our team.

What You'll Do

Description of the Team
The Inference team is responsible for delivering high-performance model serving capabilities that meet the needs of real production workloads. We work at the intersection of model behavior, serving systems, hardware, and customer requirements to improve throughput, latency, reliability, and quality across our inference stack.
About the Role
We are looking for an Applied AI Engineer to help us understand, measure, and improve the real-world performance of our inference platform. In the near term, this role will focus on building and running rigorous benchmarks, profiling model and system behavior, identifying bottlenecks, and driving targeted optimizations for both platform-wide and customer-specific workloads.

This role is intentionally scoped around applied performance work in support of the Inference organization. Initial responsibilities center on benchmarking, optimization, and workload-driven research rather than broad ownership of frontier model research agendas. Over time, the scope of the role is expected to broaden as the team and product mature.

• Build and maintain benchmarking workflows that measure latency, throughput, quality regressions, and cost across priority models and serving configurations.
• Benchmark our inference stack against realistic customer workloads and external provider baselines to identify performance gaps and improvement opportunities.
• Profile model-serving behavior across frameworks, runtimes, and hardware configurations to find bottlenecks in prefill, decode, KV cache usage, batching, graph capture, quantization, and related systems.
• Drive targeted optimization efforts for specific customer and product workloads, including tuning serving configurations, evaluating runtime features, and validating changes against representative traces and benchmarks.
• Design and run experiments on model-serving techniques such as quantization, speculative decoding, caching strategies, routing, and other inference optimizations, with careful attention to quality and correctness tradeoffs.
• Partner closely with inference platform engineers to productionize improvements and establish repeatable workflows for performance testing and regression detection.
• Produce clear technical writeups and recommendations that help the team make better decisions about model configurations, runtime choices, hardware allocation, and customer-specific deployment strategies.
• Contribute additional applied research over time as needed to support inference quality, optimization, and product performance goals.

Who You Are
• 4+ years of experience in machine learning, systems, performance engineering, or adjacent applied engineering work.
• Strong programming skills in Python and comfort working in production engineering environments.
• Experience running empirical evaluations, benchmarks, or experiments and translating results into concrete engineering decisions.
• Familiarity with LLM inference systems and tools such as vLLM, SGLang, TensorRT-LLM, or similar model-serving stacks.
• Understanding of the practical tradeoffs involved in latency, throughput, batching, GPU utilization, quantization, and quality regression analysis.
• Ability to work across model, systems, and product boundaries and stay focused on outcomes that matter for customers.
• Strong written communication and a bias toward making technical work legible and reproducible for others.

Preferred
• Experience optimizing inference workloads on modern GPU hardware.
• Experience with profiling tools such as Nsight Systems, PyTorch profilers, or custom telemetry pipelines.
• Familiarity with benchmark suites and evaluation frameworks for coding, reasoning, or agent workloads.
• Experience using real production traces or customer traffic patterns to guide optimization work.
• Experience balancing model quality and serving performance when evaluating quantization, speculative decoding, or other acceleration strategies.

Wondering If You're a Good Fit?
We believe in investing in our people, and we value candidates who can bring their own diversified experiences to our teams, even if you aren't a 100% skill or experience match.
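To make the benchmarking work described above concrete, here is a minimal sketch of the kind of latency/throughput measurement harness such a role might start from. Everything here is illustrative: `run_benchmark` and `fake_model` are hypothetical names, the serving callable is a stand-in for a real inference endpoint, and a production harness would add concurrency, token-level metrics (e.g. time to first token), and quality checks.

```python
import statistics
import time

def run_benchmark(serve_fn, prompts, warmup=2):
    """Measure per-request latency and overall throughput for a serving callable.

    serve_fn is any callable that takes a prompt and returns a completion.
    Requests are issued sequentially, which isolates per-request latency but
    understates the batched throughput a real server would achieve.
    """
    for p in prompts[:warmup]:  # warm up caches/compilation before timing
        serve_fn(p)

    latencies = []
    start = time.perf_counter()
    for p in prompts:
        t0 = time.perf_counter()
        serve_fn(p)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    latencies.sort()
    p99_idx = min(len(latencies) - 1, int(0.99 * len(latencies)))
    return {
        "requests": len(prompts),
        "throughput_rps": len(prompts) / elapsed,
        "p50_s": statistics.median(latencies),
        "p99_s": latencies[p99_idx],
    }

if __name__ == "__main__":
    # Stand-in for a real inference endpoint: sleeps to simulate decode time.
    def fake_model(prompt):
        time.sleep(0.001)
        return prompt[::-1]

    report = run_benchmark(fake_model, [f"prompt {i}" for i in range(50)])
    print(report)
```

Reporting percentile latencies rather than only the mean matters here: tail latency (p99) is usually what degrades first under batching pressure, and comparing p50 against p99 across serving configurations is a common way to surface those regressions.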
Here are a few qualities we've found compatible with our team. If some of this describes you, we'd love to talk.
• You love turning ambiguous performance problems into concrete measurements and engineering plans.
• You're curious about how model behavior, systems architecture, and hardware choices interact in production inference workloads.
• You're an expert in running disciplined experiments and using data to drive practical performance improvements.

Why Us?
We work hard, have fun, and move fast! We're in an exciting stage of hyper-growth that you won't want to miss out on. We're not afraid of a little chaos, and we're constantly learning. Our team cares deeply about how we build our product and how we work together, which is reflected in our core values:
• Be Curious at Your Core
• Act Like an Owner
• Empower Employees
• Deliver Best-in-Class Client Experiences
• Achieve More Together

We support and encourage an entrepreneurial outlook and independent thinking. We foster an environment that encourages collaboration and provides the opportunity to develop innovative solutions to complex problems. As we get set for takeoff, the growth opportunities within the organization are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!

The base salary range for this role is $165,000 to $242,000. The starting salary will be determined based on job-related knowledge, skills, experience, and market location. We strive for both market alignment and internal equity when determining compensation. In addition to base salary, our total rewards package includes a discretionary bonus, equity awards, and a comprehensive benefits program (all based on eligibility).

What We Offer
The range we've posted represents the typical compensation range for this role. To determine actual compensation, we review the market rate for each candidate, which can include a variety of factors.
These include qualifications, experience, interview performance, and location. In addition to a competitive salary, we offer a variety of benefits to support your needs, including:
• Medical, dental, and vision insurance (100% paid for by CoreWeave)
• Company-paid life insurance
• Voluntary supplemental life insurance
• Short- and long-term disability insurance
• Flexible Spending Account
• Health Savings Account
• Tuition reimbursement
• Ability to participate in the Employee Stock Purchase Program (ESPP)
• Mental wellness benefits through Spring Health
• Family-forming support provided by Carrot
• Paid parental leave
• Flexible, full-service childcare support with Kinside
• 401(k) with a generous employer match
• Flexible PTO
• Catered lunch each day in our office and data center locations
• A casual work environment
• A work culture focused on innovative disruption

Our Workplace
While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.

California Consumer Privacy Act - California applicants only

CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information. As part of this commitment, and consistent with the Americans with Disabilities Act (ADA), CoreWeave will ensure that qualified applicants and candidates with disabilities are provided reasonable accommodations for the hiring process, unless such accommodation would cause an undue hardship. If reasonable accommodation is needed, please contact: careers@coreweave.com.
Export Control Compliance
This position requires access to export-controlled information. To conform to U.S. Government export regulations applicable to that information, the applicant must be either (A) a U.S. person, defined as (i) a U.S. citizen or national, (ii) a U.S. lawful permanent resident (green card holder), (iii) a refugee under 8 U.S.C. § 1157, or (iv) an asylee under 8 U.S.C. § 1158; (B) eligible to access the export-controlled information without a required export authorization; or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.