Vacancy expired!
- 3 years of experience developing data pipelines for data ingestion or transformation using Java, Scala, or Python
- 2 years' experience with Big Data file formats (Parquet, Avro, ORC, etc.)
- At least 3 years of experience developing applications with monitoring, build tools, version control, unit testing, TDD, and change management to support DevOps
- 3 years' experience with SQL and shell scripting
- 2 years of experience with software design, including an understanding of cross-system usage and impact
- Must have expertise in Spark, Kafka, AWS, SQL, Python, and PySpark