Vacancy expired!
Integral Ad Science (IAS) is a global technology and data company that builds verification, optimization, and analytics solutions for the advertising industry, and we're looking for a Data Engineering Intern to join our Data Engineering team. If you are excited by technology that has the power to handle hundreds of thousands of transactions per second, collect tens of billions of events each day, and evaluate thousands of data points in real time, all while responding in just a few milliseconds, then IAS is the place for you! Our data pipelines process between 2B and 5B events per hour, and experience gained on pipelines at that scale is in high demand. As a Data Engineering Intern you will work with more senior engineers to build new data pipelines on AWS, migrate existing pipelines to AWS, and extend features and optimize the performance of existing data pipelines.
What you'll get to do:
- Implement data processing solutions using the Big Data stack, including but not limited to Hadoop, Spark, EMR, and Snowflake
- Work with data engineers and data scientists to analyze, design, code, debug, test, document, and deploy changes to the system
- Participate in sprint meetings and daily stand-ups
- Attend intern activities designed to allow you to develop your skills, better understand your career interests and identify opportunities for future employment
- Network with other interns and IAS employees
What you'll need:
- A rising senior actively working toward a BA/BS degree in Computer Science, Mathematics, or a related field, looking for a full-time position upon graduation
- Relevant coursework in, and interest in, build, test, automation, and DevOps frameworks
- Experience programming in object-oriented languages such as Java, Scala, or Python, or in C, as well as proficiency in Linux
- Basic understanding of algorithms and data structures
- Ability to communicate clearly, both verbally and in writing
- Effective time management skills and ability to work in a team atmosphere
Nice to have:
- Some practical experience working on AWS or another cloud provider
- Some practical experience developing with Apache Spark and/or Hive
- Good knowledge of SQL and experience with columnar datastores