Job Details

ID #12095175
State Florida
City Temple Terrace
Job type Contract
Salary USD Up to $60
Source Newt Global
Showed 2021-04-10
Date 2021-04-09
Deadline 2021-06-08
Category Et Cetera

Hadoop Administrator

Temple Terrace, Florida 33617, USA

Vacancy expired!

• Must know Hadoop and big data infrastructure.
• Expert in Hadoop administration with knowledge of Hortonworks, Cloudera, or MapR big data management tools.
• Expert in developing and managing Java and web applications.
• Expert in implementing and troubleshooting Hive, Spark, Pig, Storm, Kafka, NiFi, Atlas, Kyvos, Elasticsearch, Solr, Splunk, and HBase applications.
• Strong command of software-automation production systems (Jenkins and Selenium) and code deployment tools (Puppet, Ansible, and Chef).
• Working knowledge of Ruby or Python and common DevOps tools such as Git and GitHub.
• Working knowledge of databases (Oracle/Teradata) and SQL (Structured Query Language).
• General operational expertise: good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.

Responsibilities

Responsible for implementation and ongoing administration of Hadoop infrastructure initiatives.

• Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
• Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, Spark, and MapReduce access for the new users.
• Cluster maintenance, as well as creation and removal of nodes, using Hadoop management tools such as Ambari and Cloudera Manager.
• Sound knowledge of Ranger, NiFi, Kafka, Atlas, Hive, Storm, Pig, Spark, Elasticsearch, Splunk, Solr, Kyvos, HBase, and other big data tools.
• Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
• Screening Hadoop cluster job performance and capacity planning.
• Monitoring Hadoop cluster connectivity and security.
• Managing and reviewing Hadoop log files; file system management and monitoring.
• HDFS support and maintenance.
• Diligently teaming with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
• Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
• Implementing automation tools and frameworks (CI/CD pipelines).

