Vacancy expired!
Requirements:
- Bachelor’s degree in Computer Science or a related field is preferred.
- 10 or more years of experience in Software Development and Architecture, Business Intelligence, and Data Warehousing.
- Demonstrated understanding of the concepts, best practices, and functions of a data warehouse in a corporate environment, including data discovery, data cleansing, dimensional modeling, and relational databases.
- Strong communication and team-building skills.
- Expert level in one or more high-level programming languages (Java, C#, Python, etc.).
- Expert-level SQL and JSON handling. Experience working with binary columnar data formats, e.g., Parquet.
- Orchestration: AWS EC2/ECS/Lambda/Step Functions/EKS/EMR, Kubernetes, Docker, CloudFormation, CDK.
- Database systems: Snowflake, DynamoDB, BigQuery, SQL Server.
- Data Processing: Spark, Spark Streaming, Kinesis, Airflow.
- Applicable AWS certification such as Big Data Specialty, Architect, or Developer preferred.
- Expertise in data lakes and data warehouses is required; experience with lakehouse architectures is a big plus.
- Experience in large-scale data processing with Spark, EMR, or Glue.
- Experience automating third-party CLI tools, writing shell scripts, and integrating with APIs for data acquisition.
- Expertise in strong software development practices, including CI/CD, automated testing, source control, and code reviews.
Responsibilities:
- Design, develop, and implement the cloud warehouse pipeline for optimal extraction, transformation, and loading of data from a wide variety of data sources using a broad array of AWS technologies.
- Assemble large, complex data sets that meet functional business requirements and maximize the strategic value of the data.
- Collaborate with business users to understand their analytic objectives and business needs, and translate the objectives into technical specifications and data warehouse solutions.
- Analyze data from multiple data sources and develop processes to integrate the data into a single, consistent view.
- Discover, recommend, and implement ways to improve and ensure data reliability, efficiency, and quality.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
- Lead and perform technical work activities including logical and physical data modeling, data flows, code development, stored procedures, performance monitoring and tuning, problem support, and technical troubleshooting.
- Coordinate the efforts of multiple technical resources to ensure the solution is implemented on time and accurately.
- Provide technical guidance and mentorship to other engineers on the team.