Job Information
- Language: Python
- Location: San Francisco
- Company Description: Market Making Firm
- Job Id: 1131
This role requires a minimum of 2 years of experience with either Hadoop or Spark. The salary is highly competitive, with an 80% guaranteed bonus.
My client is looking for an engineer who enjoys finding ways to extract and display meaningful information from very large data sets. You enjoy working with low-latency, high-throughput systems. You are interested in using open-source technologies, but if the tool you need doesn’t exist, you’re happy to build it and contribute it back to the open-source community.
The Role
The project is to extract meaningful insight from large data sets. The team is building a system from scratch to explore the latency of market data delivery across the client’s global network to their client base. You’ll be involved from the beginning, designing visualizations that help both developers and business departments understand how data flows through the system and where it can be improved.
Responsibilities:
- Design and implement distributed data analytics systems using Hadoop/Spark and Python
- Manage cloud resources to maintain resiliency and performance
- Effectively roll out new features using an Agile methodology
- Work with the rest of the team to analyze latency data, find bottlenecks, and propose solutions
You need to have:
- 2+ years of experience working with Hadoop or Spark
- 2+ years of experience with Amazon EC2 (or equivalent)
- 4+ years of experience programming in Python
- Familiarity with Linux
- A solid understanding of basic statistics and core computer science concepts