Overview
This role focuses on developing and maintaining data pipelines for a unified data platform.
The ideal candidate has 4-8 years of data engineering experience and is proficient in SQL and Python.
Tags: remote, mid-level, contract, temporary, full-time, SQL, Python, Spark, Airflow, Snowflake
Locations
Remote
Requirements
- 4-8 years of experience in data engineering
- Proficiency in SQL and Python
- Experience with Spark, Airflow, Snowflake, and Azure Data Lake
- Understanding of data modeling and data governance principles
Responsibilities
- Design and maintain ETL/ELT pipelines
- Apply transformation and enrichment rules
- Collaborate with product managers and data architects
- Support data matching and entity resolution processes
- Monitor and troubleshoot data pipelines
- Contribute to documentation and code quality standards