Job Details

ID #5191238
State Michigan
City Grand Rapids
Job type Contract
Salary USD $125,000 - $156,000 /yr
Source Stefanini
Shown 2020-10-25
Date 2020-10-07
Deadline 2020-12-05
Category Miscellaneous

Data Engineer with SQL Server (Remote)

Grand Rapids, Michigan 49504, USA

Vacancy expired!

Stefanini Group is looking for a Data Engineer with SQL Server (Remote); candidates local to Grand Rapids, MI are preferred.

Additional key pieces:
a. Experience with Epic healthcare data
b. ETL experience using SQL Server / SSIS

The client is capitalizing on the vast amount of patient data collected to make the patient medical journey a personalized experience. We are seeking data engineers to design and scale databases to support a robust analytical pipeline. The job will promote collaboration between data scientists, data architects, business analysts, and clinicians to support user access to data and data infrastructure. The data engineer will play a role in integrating advanced machine learning models into production with continuous integration. The role will require a general understanding of the healthcare system for data integration across multiple data sources.

Scope of responsibilities:
- Collaborate with data scientists and business users to build the frameworks required to integrate data pipelines and machine learning models with operations
- Maintain database structure and standardize definitions for business users across the company
- Clean and verify the quality of data prior to feature engineering and advanced analytical modeling
- Build unit tests for continuous integration
- Work with data architects to build the foundational Extract/Load/Transform processes
- Support business users and clinicians in identifying the correct data sets and providing easy-to-use tools to pull data
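To give a flavor of the data-cleaning and continuous-integration duties above, here is a minimal, hypothetical sketch in Python. It uses the standard-library sqlite3 module as a stand-in for SQL Server (a real pipeline would connect via a driver such as pyodbc), and the table name, column names, and sample rows are all invented for illustration:

```python
import sqlite3

def clean_patient_rows(rows):
    """Verify data quality before loading: drop rows with a missing
    patient ID and normalize name whitespace/capitalization.
    (Hypothetical rule set, for illustration only.)"""
    cleaned = []
    for patient_id, name in rows:
        if patient_id is None:  # reject records without a key
            continue
        cleaned.append((patient_id, name.strip().title()))
    return cleaned

def load(conn, rows):
    """Load cleaned rows into a staging table (hypothetical schema)."""
    conn.execute("CREATE TABLE IF NOT EXISTS patients (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO patients VALUES (?, ?)", rows)
    conn.commit()

# Extract (stubbed sample data), transform, load
raw = [(1, "  alice smith "), (None, "bob jones"), (2, "CAROL DAY")]
conn = sqlite3.connect(":memory:")
cleaned = clean_patient_rows(raw)
load(conn, cleaned)
count = conn.execute("SELECT COUNT(*) FROM patients").fetchone()[0]
```

In a CI setup, the assertions on `clean_patient_rows` would live in a unit-test suite that runs on every commit, which is the kind of test the posting asks the engineer to build.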

Required skills and competencies:
- Ability to write production-level code in one of the following languages: Python, Hive, Pig, shell scripting, SQL, Java, or Scala
- Ability to structure databases in one of the following platforms: Hadoop, Spark, Oracle/Teradata
- Proficiency leveraging the following big data technologies to support downstream advanced analytical modeling: MapReduce, Spark, Airflow/Oozie, Kafka, HBase, Pig, NoSQL databases
- Familiarity with data architecture, modeling, and security

Qualifications (required):
- 3+ years structuring databases and working with big data
- 5+ years writing code in relevant languages

Qualifications (preferred):
- Experience working in an agile, sprint-based approach
- 1+ year working in healthcare or a related field
- Familiarity with cloud computing clusters

Education:
- Bachelor's degree in computer science, mathematics, statistics, or a related field, with 2+ years of industry experience

