Overview
The role involves designing and developing scalable data pipelines for processing large datasets.
The ideal candidate has 4+ years of experience in data engineering, with strong skills in Spark and cloud architectures.
Remote · Mid-level · Permanent · Full-time · English · Spark · Kafka
Locations
Remote
Requirements
- 4+ years of experience in software/data engineering
- Experience with Apache Spark
- Proficiency in Java, Scala, or Python
Responsibilities
- Design and implement scalable data pipelines
- Develop and optimize Databricks Spark pipelines
- Collaborate with data scientists and engineers
- Document technical designs and workflows
Benefits
Training and development allowance