Black Forest Labs

Multimodal VLM/LLM Researcher

Overview

Researcher focused on multimodal vision-language and large language models.

The ideal candidate has expertise in training large-scale vision-language models and a strong publication record.

Tags: remote, PyTorch, AI models, LLMs, generative AI, computer vision

Locations

  • United States
  • United Kingdom
  • Germany

Requirements

  • Expertise in training vision-language models
  • Strong publication record or relevant project experience
  • Proficiency in PyTorch or similar frameworks
  • Experience with distributed training systems
  • Track record of scaling AI models in production

Responsibilities

  • Lead the development and training of multimodal models
  • Drive innovation in media generation
  • Collaborate with teams to deploy models
  • Document and share research findings
  • Evaluate emerging models for integration

Benefits

  • Competitive compensation package
  • Flexible work arrangements
  • Access to state-of-the-art computing resources
  • Collaborative, research-focused environment
  • Opportunity to work with a strong technical team