Overview
The role involves developing and maintaining scalable data pipelines and integrations.
The ideal candidate has 7+ years of experience in data engineering and strong Python skills.
Tags: remote, senior, contract, temporary, Python, AWS, SQL, ETL, Spark, Hadoop, Kafka
Locations
Requirements
- 4-year degree or equivalent
- 7 years of experience in data engineering
- Experience with AWS technologies
- Experience with SQL queries
- Experience with data warehouse technologies
- Experience in a Linux environment
- Experience with Big Data tools is a plus
Responsibilities
- Develop and maintain data pipelines
- Build new data integrations
- Establish data governance processes
- Collaborate with analytics teams
- Design and develop data integrations
- Guide less experienced engineers
- Automate deployment of distributed systems