Working with and supporting the Technical Lead in establishing new pattern standards, processes, and procedures for the client's solution and data community. Specializing in data integration and data warehousing concepts to extract data from disparate sources, transform it per business requirements, and load the required tables for downstream consumption. Helping design and build solutions, communicating with both technical and business teams at a client, and conveying solutions to business requirements.

Delivery Leadership:
- Define high-level solution design options based on client requirements
- Create design standards and patterns reusable in a client's solution
- Experience in rapid prototyping of potential solutions for design trade-off discussions
- Mentor and train junior members of the team
- Complete code reviews of team members
- Accurately break down and estimate tasks for a solution
- Ability to pick up and learn new technology quickly

Engineering:
- Able to define a structured approach to problem solving
- Complete data models and designs within the client's architecture and standards
- Understand complex business environments and requirements, and design a solution based on leading practices
- Ability to document designs and solutions for understanding by client product owners
- Complete deliverables for gaining architectural approval at the client
- Understanding of a DataOps approach to solution architecture
- Solid experience in data and SQL is required

Technical:
Demonstrated experience in databases and database development. Experience in other areas is a bonus.
Database:
- SAP HANA
- Teradata
- SQL Server
- NoSQL (HBase, Cassandra, or MongoDB)
- Cloud-based databases (Hive, Cosmos DB, DynamoDB)

Database Development:
- Views, functions, stored procedures
- Query optimisation, building indexes
- OLAP / MDX

Cloud:
- AWS
- Azure
- GCP

ETL:
- SSIS
- IBM DataStage
- SAP Data Services
- Informatica or similar
- Redshift, Glue, SageMaker

Programming:
- SQL (T-SQL / HQL, etc.)
- Java
- Python
- Spark / Kafka / RabbitMQ
- UNIX and shell commands (Python / shell / Perl) is a plus

Modelling:
- Data Vault (preferred)
- Kimball (preferred)
- 3rd Normal Form / OLAP / MDX

Big Data:
- Hadoop platform (Cloudera / cloud equivalent)
- HiveQL / Spark / Oozie / Impala / Pig
- Optimising Big Data
- Streaming (NiFi / Kafka)

Methodologies:
- Agile
- PMBOK
- DataOps / DevOps

Data Acquisition:
- Pipeline creation, automation, and data delivery
- Once-off, CDC, streaming

Behavioural:
- Excellent communication skills, both written and verbal
- Ability to develop and grow technical teams
- Objective-oriented with a strong client delivery focus
- Client-focused, building strong trusting relationships with clients
- Focus on quality and risk
- Sound problem-solving ability
- Ability to understand and comprehend complex environments and systems
- Inquisitive by nature and keen to figure out how things work
Job Details
ID | #53530411 |
State | South Dakota |
City | Johannesburg |
Job type | Full-time |
Salary | USD TBD TBD |
Source | Deloitte |
Showed | 2025-02-26 |
Date | 2025-02-26 |
Deadline | 2025-04-27 |
Category | Etcetera |
Data Engineer - AI and Data