Research Engineer (Safety And Alignment), Post-Training

Character AI, Menlo Park, CA 94026

Posted 3 weeks ago

As a Safety and Alignment Research Engineer on the Post-Training team, you'll build tools to align our models and ensure they meet the highest standards of safety in the real world.

As more powerful AI models are deployed, tools to align and steer them become increasingly important. Your work will directly contribute to our groundbreaking advancements in AI, helping shape an era where technology is not just a tool, but a companion in our daily lives. At Character.AI, your talent, creativity, and expertise will not just be valued; they will be the catalyst for change in an AI-driven future.

About the role

The Post-Training team is responsible for developing our powerful pretrained language models into intelligent, engaging, and aligned products.

As a Post-Training Researcher focused on Safety and Alignment, you will work across teams and our technical stack to improve our model performance. You will get to shape the conversational experience enjoyed by millions of users per day. This will involve close partnership with our Policy, Research, and Data teams and deploying your changes directly to the product.

Example projects:

  • Develop and apply preference alignment algorithms to guide model generations.

  • Train classifiers to identify model failure modes and adversarial usage.

  • Work with annotators and red-teamers to produce useful datasets for alignment.

  • Invent new techniques for guiding model behavior.
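To give a flavor of the first project area, preference alignment is often implemented with a DPO-style objective that pushes the policy toward preferred responses relative to a frozen reference model. The sketch below is illustrative only; the function name, inputs, and values are hypothetical and do not describe Character.AI's actual stack:

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are summed log-probabilities of the chosen and rejected
    responses under the policy (pi_*) and a frozen reference model (ref_*).
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log sigmoid(margin): small when the policy prefers the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Policy already favors the chosen response -> low loss
low = dpo_loss(pi_chosen=-10.0, pi_rejected=-20.0,
               ref_chosen=-15.0, ref_rejected=-15.0)
# Policy favors the rejected response -> high loss
high = dpo_loss(pi_chosen=-20.0, pi_rejected=-10.0,
                ref_chosen=-15.0, ref_rejected=-15.0)
```

In practice a batched version of this loss would be minimized with gradient descent over the policy's parameters, with the reference model held fixed.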

Job Requirements

  • Write clear and clean production-facing and training code

  • Experience working with GPUs (training, serving, debugging)

  • Experience with data pipelines and data infrastructure

  • Strong understanding of modern machine learning techniques (reinforcement learning, transformers, etc.)

  • Track record of exceptional research or creative applied ML projects

Nice to Have

  • Experience developing safety systems for UGC/consumer content platforms

  • Experience working on LLM alignment

  • Publications in relevant academic journals or conferences in the fields of machine learning or recommendation systems

About Character.AI

Founded in 2021 by AI pioneers Noam Shazeer and Daniel De Freitas, Character is a leading AI company offering personalized experiences through customizable AI 'Characters.' As one of the most widely used AI platforms worldwide, Character enables users to interact with AI tailored to their unique needs and preferences.

Noam co-invented core LLM tech and was recently honored as one of TIME's 100 Most Influential in AI. Daniel created LaMDA, the breakthrough conversational AI now powering Google's Bard.

In just two years, we achieved unicorn status and were named Google Play's AI App of the Year, a testament to our groundbreaking technology and vision.

Ready to shape the future of AGI?

At Character, we value diversity and welcome applicants from all backgrounds. As an equal opportunity employer, we firmly uphold a non-discrimination policy based on race, religion, national origin, gender, sexual orientation, age, veteran status, or disability. Your unique perspectives are vital to our success.

