Vacancy expired!
- BA/BS or Master's degree in Computer Science, or equivalent practical experience
- 3+ years of experience as a data engineer
- Solid experience with Apache Hadoop/Spark platforms such as Hortonworks
- Experience with deployment, maintenance, and administration tasks related to cloud platforms (Azure, AWS, Google Cloud Platform, or private cloud), OpenStack, Docker, Kafka, Airflow, NiFi, and Kubernetes
- Familiarity with monitoring and log management tools such as Splunk, AppDynamics, Application Insights, and the ELK stack
- Familiarity with networking concepts, including DNS, virtual networks, WAF, and VPN
- Familiarity with network and platform security strategies, algorithms, and implementation practices
- Strong knowledge of object-oriented design patterns and implementation experience in Python
- Experience with API design using REST/SOAP and OAuth 2.0
- Expertise with relational and non-relational databases and an understanding of storage technologies (e.g., MySQL, Sybase, MongoDB, InfluxDB, Cassandra, or HBase)
- Experience with DevOps tools such as Git, Maven, and Jenkins
- Experience with Agile development concepts and related tools
- Excellent written and verbal communication skills
- Self-starter with a passion for learning and implementing new technologies
- Experience with Machine Learning and Artificial Intelligence
- Experience with business intelligence design and architecture
- Experience with web technologies like Angular 2+ (or React/Vue), TypeScript, RxJS