Research Engineer / Scientist, Interpretability
OpenAI
Overview
Researcher focused on understanding deep networks and ensuring AI safety.
The ideal candidate has a strong background in AI safety and mechanistic interpretability, with 2+ years of experience.
245k USD / year · hybrid · mid-level · permanent · full-time · English · deep learning
Locations
United States, California, San Francisco
Requirements
- Ph.D. or research experience in computer science or a related field
- 2+ years of research engineering experience
- Proficiency in Python or similar languages
Responsibilities
- Develop and publish research on deep network representations
- Engineer infrastructure for model internals
- Collaborate on unique projects
- Guide research directions for scalability
Benefits
Medical, dental, and vision insurance