Overview
The role involves designing and maintaining scalable data pipelines using big data technologies.
The ideal candidate has 6+ years of experience in Big Data development, with expertise in Spark and Scala.
Hybrid | Mid-level | Full-time | Apache Spark | Scala | AWS | Azure | GCP | Java
Locations
Bengaluru, Karnataka, India
Hyderabad, Telangana, India
Chennai, Tamil Nadu, India
Mumbai, Maharashtra, India
Requirements
Strong expertise in Apache Spark and Scala
Hands-on experience with AWS, Azure, or GCP
Responsibilities
Develop and maintain data processing pipelines
Implement cloud-based big data solutions
Collaborate with teams on data requirements
Benefits
Career growth opportunities