Overview
The role involves leading the design and implementation of scalable data pipelines for AI-driven products.
The ideal candidate has 5+ years of experience in data engineering with strong Python and SQL skills.
Remote | Senior | Full-time | English
Skills: Python, Databricks, AWS, Apache Spark, SQL
Locations
San Francisco, California, United States
Requirements
- Expertise in Python and SQL
- Experience with Databricks and Delta Lake
- Hands-on experience with AWS
Responsibilities
- Design and architect data pipelines
- Integrate with external systems
- Contribute to Lakehouse architecture
- Monitor and optimize data workflows
Benefits
Health, dental, and vision coverage