Remote Senior Data Engineer

Bristol, Connecticut

Job Description

We are looking for an experienced Data Engineer to drive multiple data initiatives, applying innovative architecture that can scale in the cloud. More specifically, we want a creative and talented individual who loves designing scalable platforms that operate at the petabyte level and extract value from both structured and unstructured real-time data. You should be a technology leader, capable of building a highly scalable and extensible Big Data platform that enables the collection, storage, modeling, and analysis of massive data sets from numerous channels. The internet-scale platforms that you design and build will be a core asset in delivering the highest-quality content to more than 150 million consumers each month. This is an opportunity to fundamentally evolve how we deliver content and monetize our audiences.

Responsibilities:

  • Build scalable analytics solutions, including data processing, storage, and serving of large-scale data through batch and streaming pipelines, covering both behavioral and ad-revenue analytics across digital and non-digital channels.
  • Harness curiosity - Change the way we think, act, and utilize our data by performing exploratory and quantitative analytics, data mining, and discovery.
  • Whiteboard solutions at the macro and granular level and debate technical solutions with data architects and Data Engineers.
  • Innovate and inspire - Think of new ways to help make our data platform more scalable, resilient and reliable and then work across our team to put your ideas into action.
  • Think at scale - Lead the transformation of a petabyte-scale, batch-based processing platform into a near real-time streaming platform using technologies such as Apache Kafka, Spark, and other open-source frameworks.
  • Clearly communicate design and development strategies to all stakeholders, and educate them as needed.
  • Have pride - Ensure performance isn't our weakness by implementing and refining robust data processing using Java, Scala, and database technologies such as Snowflake.
  • Continuously reassess cloud compute capabilities to optimize the environment and infrastructure and get the best value and performance out of cloud assets.
  • Lead and coach - Mentor other software engineers by developing reusable frameworks. Review designs and code produced by other engineers.
  • Assist in the preparation of presentations and formal analysis in support of executive decision making and strategy development.
  • Build and Support - Embrace the DevOps mentality to build, deploy, and support applications in the cloud with minimal help from other teams.
Required Skills:
  • Strong knowledge of big data frameworks such as Hadoop and Apache Spark, NoSQL systems such as Cassandra or DynamoDB, and streaming technologies such as Apache Kafka
  • Understanding of reactive programming and dependency-injection frameworks such as Spring for developing REST services
  • Hands-on experience with newer technologies relevant to the data space, such as Spark, Airflow, Apache Druid, and Snowflake (or other OLAP databases).
  • Cloud first - Extensive experience developing and deploying in a cloud-native environment, preferably AWS.
  • Embrace Client - Work with data scientists to operationalize machine learning models and build apps that harness the power of machine learning.
  • Problem solver - Enjoy new and meaningful technology and business challenges that require you to think ahead
  • 10+ years of experience developing data-driven applications using a mix of languages (Java, Scala, Python, SQL, etc.) and open-source frameworks to implement data ingestion, processing, and analytics
