Vacancy expired!
- Work with architects, machine learning engineers, and data engineers to identify technical and functional needs of systems
- Ensure adherence to the defined development life cycle, software design practices, and architecture strategy and intent
- Design and develop distributed computation and parallel processing components to support high-volume data pipelines
- Support DevOps and CI/CD processes
- Contribute to application frameworks in support of greater resiliency and self-healing capabilities
- Contribute to monitoring frameworks to accomplish end-to-end flow monitoring and noise-free alerting with proper telemetry
- Implement performance tests; identify bottlenecks and opportunities for optimization and continuous improvement
- Participate in deep design reviews with application and platform teams throughout the life cycle to help develop software for reliability, speed, and scale
- Act as a coach and mentor to team members on their assigned project tasks
- Develop a cohesive software engineering team and ensure their continued success
- Conduct product work reviews with team members
- BS/BA degree or equivalent experience
- Advanced knowledge of application, data and infrastructure architecture disciplines
- 5+ years of experience with Big Data technologies (Spark, Impala, Hive, Redshift, Kafka, etc.)
- 5+ years of experience in Java/Python/SQL development
- 2+ years of AWS experience required
- Expertise in designing, coding, testing, and delivering solutions on the AWS stack that support high data volumes