Vacancy expired!
- Implement features in an existing Flink-based streaming data ingestion pipeline from Kafka.
- Production support and on-call rotation.
- Monitor the ingestion pipeline and resolve problems rapidly when something breaks.
- Collect and add metrics to the existing Datadog dashboard.
- Monitor the Slack channel and answer customer/user questions about data and latency.
- Agile mindset; highly collaborative over WebEx, with effective communication via Slack, email, and in person.
- 7+ years of overall software development experience, with at least 3 years on large-scale data platforms.
- Excellent object-oriented programming expertise in Java.
- Excellent problem-solving and debugging skills to deliver quick fixes for critical, show-stopping problems.
- Hands-on experience developing streaming (data ingestion) pipelines from Kafka to S3/Hive using Spark or Flink.
- Answer complex questions using data and analysis, and clearly communicate findings to engineering teams for direction.
- Hands-on experience with AWS cloud services such as S3, IAM, EC2, EKS, and VPN.
- Proficient in designing and scheduling workflows in Apache Airflow.
- Improve the data pipeline monitoring system and add relevant metrics to track overall system health.
- Understanding of Docker containers and Kubernetes.
- Understanding of CI/CD automation and willingness to learn new technologies.
- Java, AWS, S3, SQL, Hive, Kafka, Spark/Flink, Git, Parquet, Avro, CI/CD and Streaming.
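For illustration only (none of this code comes from the posting): a Kafka-to-S3/Hive ingestion job of the kind described above typically lands files under Hive-style partition paths. A minimal standard-library sketch of such a path builder, with a hypothetical bucket and table name:

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

// Sketch: build Hive-style S3 partition paths (dt=.../hour=...) for
// records landing from Kafka. "my-bucket" and "events" are hypothetical.
public class PartitionPathBuilder {
    private static final DateTimeFormatter FMT = DateTimeFormatter
            .ofPattern("'dt='yyyy-MM-dd/'hour='HH")
            .withZone(ZoneOffset.UTC);

    // Returns the target directory for a record's event time,
    // e.g. s3://my-bucket/events/dt=2024-01-15/hour=09/
    static String pathFor(String bucket, String table, Instant eventTime) {
        return "s3://" + bucket + "/" + table + "/" + FMT.format(eventTime) + "/";
    }

    public static void main(String[] args) {
        Instant t = Instant.parse("2024-01-15T09:30:00Z");
        System.out.println(pathFor("my-bucket", "events", t));
    }
}
```

Hour-level partitioning is a common layout because it keeps Hive partition pruning effective while bounding the number of small files per partition.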
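Also for illustration (all names here are hypothetical, not from the posting): the "data and latency" questions above are usually answered from an end-to-end latency metric. A small sliding-window tracker of the sort whose readings could feed a Datadog dashboard can be sketched as:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch: track event-time-to-ingest latency over a bounded window of
// recent samples, exposing max and average for a health dashboard.
public class LatencyTracker {
    private final Deque<Long> samplesMs = new ArrayDeque<>();
    private final int capacity;

    LatencyTracker(int capacity) {
        this.capacity = capacity;
    }

    // Record one latency sample (ms), evicting the oldest when full.
    void record(long latencyMs) {
        if (samplesMs.size() == capacity) {
            samplesMs.removeFirst();
        }
        samplesMs.addLast(latencyMs);
    }

    // Maximum latency over the retained window; 0 when no samples.
    long maxMs() {
        return samplesMs.stream().mapToLong(Long::longValue).max().orElse(0L);
    }

    // Average latency over the retained window; 0.0 when no samples.
    double avgMs() {
        return samplesMs.stream().mapToLong(Long::longValue).average().orElse(0.0);
    }

    public static void main(String[] args) {
        LatencyTracker tracker = new LatencyTracker(3);
        tracker.record(120);
        tracker.record(95);
        tracker.record(310);
        System.out.println("max=" + tracker.maxMs() + "ms avg=" + tracker.avgMs() + "ms");
    }
}
```

In a real Flink job the equivalent numbers would come from the framework's own metric groups rather than a hand-rolled class; this sketch only shows the shape of the computation.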