Vacancy expired!
- Improve BlackRock's products and services suite by crafting, growing, and optimizing our data and data pipeline architecture.
- You will help lead architecture for a multi-discipline, multi-region team of data scientists, engineers, and investment professionals working on a corporate-wide set of client, investor, and operational problems.
- You will build and operationalize data pipelines that enable squads to deliver high-quality, data-driven products.
- You will be accountable for managing high-quality datasets exposed for internal and external consumption by downstream users and applications, and will lead the creation and maintenance of optimized data pipeline architectures for large, sophisticated data sets.
- Assemble large, complex data sets that meet BlackRock business requirements.
- Act as lead to identify, design, and implement internal process improvements, and relay them to the relevant technology organization.
- Work with partners to assist with data-related technical issues and support their data infrastructure needs.
- Automate manual ingest processes and optimize data delivery subject to service-level agreements; work with infrastructure teams on re-designs for greater scalability.
- Keep data separated and segregated according to relevant data policies.
- Work with data scientists to develop data-ready tools that support their work.
- Assist in developing business recommendations, presenting findings effectively to partners at multiple levels using visual displays of quantitative information. Communicate findings to partners as needed.
What You'll Need:
- 3-5+ years of experience in a data engineer role with a BA or MS degree in a quantitative subject area (computer science, mathematics, statistics, data science, economics, physics, engineering or related field)
- Experience building and optimizing 'big data' pipelines, architectures, and data sets. Familiarity with data pipeline and workflow management tools such as Luigi and Airflow
- Advanced working SQL knowledge and experience with relational databases.
- Experience with Hadoop, Spark, and Kafka
- Experience with Amazon Web Services (AWS) and Google Cloud Platform
- Experience with stream-processing systems such as Storm and Spark Streaming
- Experience with an object-oriented or scripting language such as Python, Scala, or Java