Vacancy expired!
- 5+ years of experience designing and implementing batch or real-time data pipelines
- Hands-on experience with batch processing (Spark, Presto, Hive) or streaming (Flink, Beam, Spark Streaming)
- Experience with AWS and knowledge of its ecosystem; experience scaling and operating Kubernetes
- Excellent communication skills are a must, including experience working directly with customers to explain how they would use the infrastructure to build complex data pipelines
- Proven ability to work in an agile environment, with the flexibility to adapt to change
- Able to work independently and research possible solutions to unblock customers
- Programming experience in Scala, Java, or Python
- Fast learner; experience with other common big data open-source technologies is a big plus
- Knowledge of machine learning (Client) is a nice-to-have