Job Details

ID #19717661
State Delaware
City Newark
Job type Permanent
Source Bank Of America
Shown 2021-09-16
Date 2021-09-15
Deadline 2021-11-13
Category Internet Engineering

Feature Lead Technology

Newark, Delaware

Vacancy expired!

Job Description:

Position Summary

Responsibilities
• Lead projects end-to-end: meet with business users to determine requirements, analyze the data lake for relevant datasets, collaborate with other developers to design a technical solution, and see the project through to completion.
• Design and build analytical workflows that take data from the lake and transform, filter, aggregate, and compute a meaningful result or report for use by a consumer application (illustrated in the first sketch after this posting).
• Develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels.
• Continuously evaluate new technologies, innovate, and deliver solutions for business-critical applications.

Required Skills
• Bachelor's or Master's degree in Computer Science, or equivalent experience
• Minimum 8 years of software development experience
• Minimum 5 years of experience with the Hadoop/Cloudera ecosystem, including tools such as Spark, MapReduce, YARN, Hive, Sqoop, Impala, Kafka, and Oozie
• Experience with Apache Spark
• Experience with Unix/Linux and shell scripting
• Experience with two or more programming languages (SQL, Java, Python, Scala, R)
• Experience leading software projects from the design phase through release
• Experience using the data lake to design and produce analytical output through batch and real-time processing (see the second sketch after this posting)
• Strong understanding of capacity planning, the software development lifecycle, and enterprise production deployments
• Hands-on experience benchmarking systems, analyzing bottlenecks, and designing performant code

Preferred Skills
• SDLC methodology: Agile / Scrum / iterative development
• Job scheduling tools (Autosys)
• Version control systems (Git, Bitbucket)
• Continuous Integration / Continuous Delivery (CI/CD) pipelines (Jenkins)
• Real-time streaming (Kafka)
• Visual analytics tools (Tableau)
• NoSQL technologies (HBase)
• Data integration and data security on the Hadoop ecosystem (Kerberos)
• Awareness of or experience with a data lake on the Cloudera ecosystem

Job Band: H5
Shift: 1st shift (United States of America)
Hours Per Week: 40
Weekly Schedule:
Referral Bonus Amount: 0
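As an illustration (not part of the original posting) of the batch analytical workflow described under Responsibilities, here is a minimal PySpark sketch: read a dataset from a data-lake path, filter it, aggregate it, and write a report for a consumer application. The input path, column names, and output path are all hypothetical placeholders.

# Minimal PySpark sketch of a batch data-lake workflow: read, filter,
# aggregate, write. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("account-summary-report").getOrCreate()

# Read a hypothetical Parquet dataset from the lake.
txns = spark.read.parquet("/datalake/raw/transactions")

# Keep only settled transactions, then aggregate per account.
report = (
    txns.filter(F.col("status") == "SETTLED")
        .groupBy("account_id")
        .agg(F.count("*").alias("txn_count"),
             F.sum("amount").alias("total_amount"))
)

# Write the result where a consumer application can pick it up.
report.write.mode("overwrite").parquet("/datalake/reports/account_summary")
spark.stop()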
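The posting also asks for real-time processing experience (Kafka streaming). As a companion sketch under the same caveats, here is a minimal Spark Structured Streaming job that consumes a hypothetical Kafka topic and maintains a running event count per key. The broker address and topic name are placeholders, and the spark-sql-kafka connector must be on the classpath for the "kafka" source to resolve.

# Minimal Spark Structured Streaming sketch: consume a hypothetical Kafka
# topic and keep a running count per account. Requires the
# spark-sql-kafka-0-10 connector package at submit time.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("realtime-txn-counts").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
         .option("subscribe", "transactions")               # placeholder topic
         .load()
)

# Kafka delivers key/value as binary; decode the key and count events per key.
counts = (
    stream.select(F.col("key").cast("string").alias("account_id"))
          .groupBy("account_id")
          .count()
)

# Console sink for illustration only; a real job would use a durable sink.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()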
