Overview
This role involves leading the design and construction of data pipelines and systems for large datasets.
The ideal candidate has 15-20 years of experience and strong proficiency in data engineering technologies.
Tags: Remote, Senior, English, Spark, Airflow, BigQuery, dbt, Hadoop
Locations
Remote
Requirements
- Bachelor's degree required
- 15-20 years of experience
- Proficiency in Java, Golang, Scala, or Python
- Experience with big data processing systems
- Experience with cloud technology stacks
- Experience with data warehousing architecture
- Previous startup experience preferred
Responsibilities
- Develop and implement data pipelines
- Process large volumes of data
- Collaborate with data scientists
- Maintain and optimize existing data systems
- Stay updated with advances in data engineering