Vacancy expired!
One of the largest health insurers in the nation is focused on continuously building an industry-defining, world-class IT capability. Healthcare is forever evolving, especially due to emerging technologies, making this a great experience to add to your resume. Come join their winning team! This contract role as a Hadoop Developer will be responsible for developing, integrating, testing, and maintaining existing and new applications, and requires knowledge of one or more programming languages and one or more development methodologies / delivery models. Candidates must be located in Chicago, IL or Richardson, TX.

Responsibilities:
- Provide subject matter expertise to Service Delivery and other Center teams, as required.
- Contribute to product feature prioritization and technology roadmaps.
- Design, build and unit test highly scalable applications.
- Provide maintenance support to applications as required supporting incident escalations.
- Identify new technologies, trends, and opportunities.
- Participate in sprint planning, design, coding, unit testing, sprint reviews.
- Provide basic design documents and translate them into component-level designs to accelerate development. Design, develop, and distribute reusable technical components.
- Assist in developing technical documentation; participate in test-plan development, integration, and deployment.
- Define & develop project requirements, functional specifications, and detailed designs of application solutions for clients.
Requirements:
- 4-6 years of experience in Big Data development using Hadoop, with a good understanding of all phases of the software development life cycle.
- Experience using Hadoop technologies such as Hive, Spark, Pig, or Kafka.
- Must have extensive hands-on experience in designing, developing, and maintaining software solutions on Big Data platforms such as the Hadoop ecosystem.
- Must have strong UNIX shell scripting experience.
- Must have experience with an IDE such as Eclipse.
- Must have working experience with Spark and Scala/Python.
- Preferred experience developing Pig scripts/HiveQL, HBase, Sqoop, and UDFs for analyzing structured, semi-structured, and unstructured data flows.
- Preferred experience developing MapReduce programs that run on the Hadoop cluster using Java/Python.
- Preferred experience developing in a cloud environment such as Azure.
- Prior experience using Talend with Hadoop technologies is not mandatory but a big advantage.