Overview
This role involves designing and building scalable data pipelines using big data technologies.
The ideal candidate has 4+ years of experience in data engineering, with strong skills in Apache Spark and SQL.
Remote · Mid-level · Full-time · English · Apache Spark · SQL · Python · PySpark · ETL · Apache Airflow · AWS · GCP · Azure + 1 more
Locations
Remote
Requirements
- 2+ years of data engineering with Apache Spark and SQL
- Knowledge of PySpark and big data technologies
- Experience with ETL pipelines
- Familiarity with cloud platforms like AWS, GCP, or Azure
- Understanding of software development lifecycles
Responsibilities
- Design and build data pipelines
- Collaborate with the data science team
- Create standardized data models
- Troubleshoot ETL pipelines
- Promote software development best practices
- Document development updates
Benefits
- Excellence Centers meetups