Vacancy expired!
- Partner with business stakeholders to gather requirements and translate them into technical specifications and process documentation for IT counterparts (onshore and offshore)
- Advanced database knowledge: creating and optimizing SQL queries, stored procedures, functions, data partitioning, indexing, and reading execution plans
- Skilled in writing and troubleshooting Python/PySpark scripts to generate extracts and to cleanse, conform, and deliver data for consumption
- Expert-level understanding and implementation of ETL architecture: data profiling, process flow, metric logging, and error handling
- Support continuous improvement by investigating and presenting alternatives to processes and technologies to an architectural review board
- Develop and ensure adherence to published system architectural decisions and development standards
- Lead and mentor data engineers in their careers, helping them produce higher-quality solutions at greater velocity through optimization training and code review
- Multi-task across several ongoing projects and daily duties of varying priorities as required
- Requires interaction with offshore counterparts to communicate business requirements in technical design documents
- 7+ years of development experience
- Bachelor’s degree in Computer Science, MIS, or a related field (industry experience substitutable)
- Expert level in data warehouse design/architecture, dimensional data modeling and ETL process development
- Advanced development in SQL/NoSQL scripting and complex stored procedures (Snowflake, SQL Server, DynamoDB, Neo4j a plus)
- Extremely proficient in Python, PySpark, and Java
- AWS Expertise – Kinesis, Glue (Spark), EMR, S3, Lambda, and Athena
- Streaming Services – Confluent Kafka and Kinesis (or equivalent)
- Working experience with global teams is a MUST
- The position will be onsite at a customer location in Dallas, 5 days a week