Vacancy expired!
- Design and recommend the best-suited approach for data movement to/from different sources using Apache/Confluent Kafka
- Good understanding of event-based architecture, messaging frameworks, and stream-processing solutions built on the Kafka messaging framework
- Hands-on experience with Kafka Connect and Schema Registry in a high-volume environment
- Strong knowledge of and exposure to Kafka brokers, ZooKeeper, KSQL, Kafka Streams, and Confluent Control Center
- Good knowledge of the big data ecosystem to design and develop capabilities that deliver solutions via CI/CD pipelines
- Proven experience writing and troubleshooting Python/PySpark scripts to generate extracts and to cleanse, conform, and deliver data for consumption
- Strong working knowledge of the AWS data analytics ecosystem, such as AWS Glue, S3, Athena, and SQS
- Good understanding of other AWS services such as CloudWatch monitoring, scheduling, and automation services
- Good experience with Kafka connectors such as MQ, Elasticsearch, JDBC, FileStream, and JMS source connectors, as well as tasks, workers, converters, and transforms
- Working knowledge of the Kafka REST Proxy and experience building custom connectors using Kafka core concepts and APIs
- Create topics, set up cluster redundancy, deploy monitoring tools and alerts, and apply best practices
- Develop and ensure adherence to published system architectural decisions and development standards
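The Python scripting responsibility above (generating extracts, then cleansing and conforming data for consumption) can be sketched as a minimal pure-Python example; the field names and cleansing rules here are hypothetical, and a production version would typically use PySpark DataFrames instead:

```python
import csv
import io

# Hypothetical raw extract: padded names, inconsistent casing, a missing amount.
RAW_EXTRACT = """id,name,amount
1,  Alice ,10.5
2,Bob,
3,carol,7
"""

def cleanse(rows):
    """Conform records to a delivery schema: trim whitespace,
    title-case names, cast types, default missing amounts to 0.0."""
    cleaned = []
    for row in rows:
        cleaned.append({
            "id": int(row["id"]),
            "name": row["name"].strip().title(),
            "amount": float(row["amount"]) if row["amount"] else 0.0,
        })
    return cleaned

records = cleanse(csv.DictReader(io.StringIO(RAW_EXTRACT)))
```

In a PySpark version the same rules would map onto `trim`, `initcap`, `cast`, and `coalesce` column expressions, letting the cleanse step scale across partitions.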
- Best industry Salary
- Yearly bonus
- Medical and dental benefits
- 401(k)
- Parental benefits
- Career growth opportunity