This vacancy has expired.
Responsibilities:
- Design and build high-performance, scalable Golang microservices
- Collaborate with DevOps engineer to integrate unit and integration tests into a CI/CD pipeline for the service
- Create monitoring dashboards and detailed alerting in all environments
- Support these services as required for bug fixes, new feature additions, and data science model adjustments
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Collaborate with stakeholders including the executive, product, data and design teams to assist with data-related technical issues and support their data services needs
- Assist less experienced engineers on microservice-related tasks and guide their development
- Build reusable production data pipelines
- Help to manage the infrastructure and data pipelines needed to bring an ML solution to production
- Assemble large complex data sets that meet both functional and non-functional business requirements
- Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.
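The responsibilities above center on reusable production data pipelines. As a minimal sketch (the `Record` type, field names, and `normalize` stage are invented for illustration, not taken from the posting), a reusable pipeline in Go can be expressed as a slice of transform stages applied in order to each record:

```go
package main

import (
	"fmt"
	"strings"
)

// Record is a hypothetical raw input row from an upstream source.
type Record struct {
	ID    int
	Email string
}

// normalize is one reusable transform stage: it trims whitespace and
// lowercases the email field so downstream loads see a consistent format.
func normalize(r Record) Record {
	r.Email = strings.ToLower(strings.TrimSpace(r.Email))
	return r
}

// runPipeline applies each stage in order to every record, mirroring a
// simple extract -> transform -> load flow; new stages compose without
// changing the driver.
func runPipeline(records []Record, stages ...func(Record) Record) []Record {
	out := make([]Record, 0, len(records))
	for _, r := range records {
		for _, stage := range stages {
			r = stage(r)
		}
		out = append(out, r)
	}
	return out
}

func main() {
	raw := []Record{
		{ID: 1, Email: "  Alice@Example.COM "},
		{ID: 2, Email: "bob@example.com"},
	}
	for _, r := range runPipeline(raw, normalize) {
		fmt.Printf("%d %s\n", r.ID, r.Email)
	}
}
```

Keeping each stage a pure `func(Record) Record` is what makes the stages reusable across batch and streaming entry points.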
Requirements:
- Strong communication and teaching skills to guide and mentor less experienced engineers
- 7+ years of Software development experience using Golang and/or Java including strong understanding of software engineering principles
- 4+ years of experience designing and developing low-latency microservices and API/gRPC contracts
- 3+ years of experience deploying and managing containerized applications, preferably on the managed Kubernetes services of Google Cloud Platform or AWS (GKE or EKS)
- Experience with microservice unit, integration, and load testing
- Experience with alerting and monitoring tools (New Relic, CloudWatch, etc.)
- Experience developing streaming capabilities in Kafka or Kinesis
- Ability to work in a Linux environment.
- 1 year of experience working with distributed data technologies (e.g. Hadoop, MapReduce, Spark, Kafka, Flink) for building efficient, large-scale ‘big data’ pipelines
- Experience implementing both real-time and batch data ingestion pipelines using best practices
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- B.S. in computer science, software engineering, computer engineering, electrical engineering, or related area of study