Overview
The role involves building scalable data solutions and optimizing data pipelines.
The ideal candidate has 3+ years of experience and strong SQL and Python skills.
Locations
Remote (mid-level, English-speaking)
Requirements
- Proficient in SQL and Python
- Experience with Spark and ETL development
- Knowledge of data warehousing principles
- Familiar with cloud platforms (GCP, AWS, Azure)
Responsibilities
- Design and maintain ETL pipelines
- Implement scalable data architectures
- Collaborate with data teams
- Monitor and optimize data workflows
- Ensure data integrity and security
- Document technical designs