Job Seekers, please send resumes to resumes@hireitpeople.com
Must Have Skills:
- Spark
- Python
Detailed Job Description:
- Design and implement distributed data processing pipelines using Spark, Hive, Python, and other tools and languages prevalent in the Hadoop ecosystem (a minimal sketch follows this list).
- Ability to design and implement end-to-end solutions.
- Experience publishing RESTful APIs to enable real-time data consumption using OpenAPI specifications.
- Experience with NoSQL technologies such as HBase, Cassandra, and DynamoDB.
- Familiarity with distributed stream processing frameworks.
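For context, here is a minimal PySpark sketch of the kind of batch pipeline described above. It is illustrative only: the input path, the Hive table name (analytics.daily_user_spend), and the column names (user_id, event_ts, amount) are hypothetical, not part of this role's actual systems.

```python
# Minimal PySpark batch pipeline sketch. All paths, table names, and
# column names are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("event-aggregation-sketch")
    .enableHiveSupport()  # read/write Hive tables in the Hadoop ecosystem
    .getOrCreate()
)

# Read raw events from HDFS (path is illustrative).
events = spark.read.parquet("hdfs:///data/raw/events.parquet")

# Clean and aggregate: drop malformed rows, then total spend per user per day.
daily_spend = (
    events
    .filter(F.col("amount").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("user_id", "event_date")
    .agg(F.sum("amount").alias("total_spend"))
)

# Persist the result as a partitioned Hive table (name is illustrative).
(
    daily_spend.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.daily_user_spend")
)
```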
Minimum years of experience*: 5+
Certifications Needed: No
Top 3 responsibilities you would expect the Subcon to shoulder and execute*:
- Design and implement distributed data processing pipelines using Spark, Hive, Python, and other tools and languages prevalent in the Hadoop ecosystem, including end-to-end solution design and implementation.
- Publish RESTful APIs to enable real-time data consumption using OpenAPI specifications (a minimal sketch follows this list).
- Work with NoSQL technologies such as HBase, Cassandra, and DynamoDB.
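As a sketch of the API responsibility above, the snippet below publishes one REST endpoint whose OpenAPI document is generated automatically, using FastAPI as one common Python option. The endpoint path, the DailySpend model, and the in-memory FAKE_STORE (standing in for a real-time store such as HBase or Cassandra) are hypothetical examples, not this role's actual API.

```python
# Minimal sketch of publishing a RESTful API with an OpenAPI spec.
# FastAPI generates the OpenAPI document automatically; the endpoint,
# model, and lookup logic here are hypothetical examples.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Daily Spend API", version="1.0.0")

class DailySpend(BaseModel):
    user_id: str
    event_date: str
    total_spend: float

# Stand-in for a real-time store (e.g., HBase or Cassandra in production).
FAKE_STORE = {
    ("u123", "2024-01-15"): 42.50,
}

@app.get("/users/{user_id}/spend/{event_date}", response_model=DailySpend)
def get_daily_spend(user_id: str, event_date: str) -> DailySpend:
    """Return one user's total spend for a given date."""
    amount = FAKE_STORE.get((user_id, event_date))
    if amount is None:
        raise HTTPException(status_code=404, detail="No record found")
    return DailySpend(user_id=user_id, event_date=event_date, total_spend=amount)
```

Served under uvicorn, FastAPI exposes the generated spec at /openapi.json and interactive documentation at /docs.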
Interview Process (Is face-to-face required?): No
Does this position require visa-independent candidates only? No