Research Engineer, Multimodal at Character.AI


Location: Redwood City, California, US
Type: On-site · Full-time
Compensation: $225k–$400k/yr
Skills: PyTorch, Video Generation, Image Generation, Audio Generation, Multimodal Models, Diffusion Models, DiT, ControlNet (+22 more)
About the Role and Team

As a Research Engineer on the Multimodal team, you'll be at the forefront of building and advancing the video and image generation models that bring AI characters to life in entirely new ways. Your work will directly shape how millions of users experience rich, expressive, and visually compelling AI interactions every day.

The Multimodal team trains, fine-tunes, and deploys cutting-edge image, audio, and video generation models that power Character.AI's visual experiences. We work across the full model lifecycle, from data pipelines and training to deployment and product integration. As a Multimodal Research Engineer, you will own and advance our video model training efforts, including joint audio-visual generation and image-to-video. You will collaborate across research, product, and infrastructure to push the boundaries of what AI-generated visuals can look and feel like at scale.

What You'll Do

• Lead fine-tuning and continued training of video generation models, including image-to-video and joint audio-visual generation.
• Design and experiment with novel model architectures for multimodal generation, including multimodal conditioning on voice, structured text, and reference images.
• Apply techniques such as LoRA, RLHF, and full-parameter fine-tuning to improve model quality across diverse visual scenarios.
• Design and build large-scale data pipelines and automated annotation workflows to support continuous model improvement.
• Explore model compression, inference acceleration, and serving optimizations to enable efficient real-time video processing at scale.
Who You Are

• Strong passion for pushing the boundaries of visual AI, with a self-driven, hands-on approach to solving complex technical problems
• Proficient in PyTorch, with end-to-end experience across data processing, model training, and deployment
• Solid understanding of video and image generation architectures, including diffusion models, DiT, ControlNet, and state-of-the-art video generation models
• Experience with multimodal model training, including working with audio, vision, and language modalities together
• Experience with distributed training tools (FSDP, DeepSpeed, etc.)
• Experience with large-scale data processing, dataset construction, and automated data cleaning

Nice to Have

• Experience with joint audio-visual or speech-conditioned generation models
• Experience with AIGC, video effects, character animation, or asset generation products
• Familiarity with ML deployment and orchestration (Kubernetes, Slurm, Docker, cloud platforms)
• Publications in relevant venues (NeurIPS, ICLR, CVPR, ECCV, ICCV, or similar)

About Character.AI

Character.AI empowers people to connect, learn, and tell stories through interactive entertainment. Over 20 million people visit Character.AI every month, using our technology to supercharge their creativity and imagination. Our platform lets users engage with tens of millions of characters, enjoy unlimited conversations, and embark on infinite adventures. In just two years, we achieved unicorn status and were honored as Google Play's AI App of the Year, a testament to our innovative technology and visionary approach. Join us in establishing this new entertainment paradigm and shaping the future of consumer AI!

At Character.AI, we value diversity and welcome applicants from all backgrounds. As an equal opportunity employer, we do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, veteran status, or disability. Your unique perspectives are vital to our success.