Overview
The role involves developing data pipelines and aligning data systems with business goals.
The ideal candidate has 8+ years of experience in Data Engineering, with strong proficiency in Scala and Kafka.
This is a remote role for applicants based in the USA.
Tags: remote, senior, full-time, English, Scala, Kafka, Azure, PySpark, SQL, Apache Spark, ETL
Locations
Remote (USA)
Requirements
All of the following are required:
- Bachelor's degree
- 8+ years of experience in Data Engineering
- Strong proficiency in Scala
- Strong proficiency in Kafka
- Experience with real-time streaming
- Experience in PySpark
- Strong proficiency in Python
- Strong proficiency in SQL
Responsibilities
- Combine data from different sources
- Align data systems with business goals
- Harmonize raw data into a consumer-friendly format
- Build data ingestion pipelines
- Perform data wrangling
Benefits
- Career development opportunities
- High degree of individual responsibility