hum.ai

MLOps Engineer - Supercomputing

Overview

The role involves managing the infrastructure used to train and serve large machine learning models on GPUs.

The ideal candidate has 3+ years of experience building scalable training and inference pipelines and working with cloud computing platforms.

Strong preference for in-person work in San Francisco; remote work is possible for exceptional candidates.

remote · mid · permanent · full-time · English · Python · Bash · PowerShell · Docker · Kubernetes · AWS · GCP · Azure

Locations

  • United States, California, San Francisco

Requirements

  • Bachelor's degree or equivalent experience
  • 3+ years of relevant experience
  • Experience with scalable training-inference pipelines

Responsibilities

  • Run large-scale experiments
  • Manage infrastructure for foundation models
  • Optimize models for performance and accuracy