Cerebras Systems seeks a Principal Machine Learning Investigator to lead a new team that advances machine learning efforts in coordination with existing groups. The role addresses the challenge of scaling AI applications without managing hundreds of GPUs or TPUs, formulating strategy and building capabilities in areas such as post-training reinforcement learning, dataset curation, large language model pretraining, and sparsity techniques, and in domains such as coding agents and generative language. The position leverages Cerebras' wafer-scale architecture to optimize training and inference.

Cerebras Systems designs and produces the world's largest AI chip, roughly 56 times the size of a standard GPU. The company delivers AI compute equivalent to dozens of GPUs on a single chip, with the programming simplicity needed for large-scale machine learning applications. Customers span global corporations, national labs, and healthcare systems, including a multi-year partnership with Mayo Clinic announced in January. In August, Cerebras launched Cerebras Inference, which outpaces GPU-based cloud services by more than 10 times for generative AI inference.

The investigator will recruit and develop a team for industry research and advanced development, coordinating with the Field ML, Applied ML, and Core ML teams. Day-to-day work includes adapting novel algorithms and model architectures to the Cerebras platform and systematically training, tuning, and evaluating models, using techniques such as reinforcement learning to raise deployment quality and dataset optimization to speed up training. The role involves collaboration with internal teams on hardware and software co-design, with external partners such as customers and academics, and across domains such as image and video processing, along with the ongoing challenge of optimizing sparsity to improve training time and inference throughput.

The work environment emphasizes a simple, non-corporate culture that respects individual beliefs and supports continuous learning and growth. Opportunities include publishing and open-sourcing cutting-edge AI research, working on one of the world's fastest AI supercomputers, and enjoying job stability with startup vitality. Professional development centers on building breakthrough AI platforms beyond GPU constraints. The posting does not specify compensation or location details.