Job Seekers, please send resumes to resumes@hireitpeople.com

Must Have Skills:
- Experience performing Big Data administration and engineering activities on multiple Hadoop, Kafka, HBase, and Spark clusters
- Very good understanding of the Big Data ecosystem
- Hands-on experience with Unix scripting
- 7 years of overall experience, with at least 5 years in Hadoop administration
- The candidate will be responsible for performing Big Data administration and engineering activities on multiple Hadoop, Kafka, HBase, and Spark clusters
- Work on performance tuning and continuously improve operational efficiency
- Monitor the health of the platforms, generate performance reports, and drive continuous improvements (see the monitoring sketch after this list)
- Work closely with the development, engineering, and operations teams on key deliverables, ensuring production scalability and stability
- Develop and enhance platform best practices
- Ensure the Hadoop platform can effectively meet performance & SLA requirements
- Responsible for the Big Data production environment, which includes Hadoop (HDFS and YARN), Hive, Spark, Livy, Solr, Oozie, Kafka, Airflow, NiFi, HBase, etc.
- Perform optimization, debugging, and capacity planning of a Big Data cluster
- Perform security remediation, automation, and self-healing as required
- Requires experience with cluster design, configuration, installation, patching, upgrading, and high-availability support
- Experience with monitoring and configuration management tools such as Nagios, Ganglia, Chef, and Puppet
- Should have experience with Hadoop cluster security implementations such as Kerberos, Knox, and Sentry (see the Kerberos sketch after this list)
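As a concrete illustration of the platform health monitoring and capacity reporting duties above, here is a minimal sketch that polls the YARN ResourceManager metrics REST endpoint (/ws/v1/cluster/metrics). The ResourceManager address and the alert threshold are placeholder assumptions; a Kerberized cluster would additionally need SPNEGO authentication.

```python
# Minimal cluster health/capacity poll via the YARN ResourceManager REST API.
import requests

RM_URL = "http://rm-host.example.com:8088"  # hypothetical ResourceManager address

def report_cluster_health() -> None:
    resp = requests.get(f"{RM_URL}/ws/v1/cluster/metrics", timeout=10)
    resp.raise_for_status()
    m = resp.json()["clusterMetrics"]

    used_pct = 100.0 * m["allocatedMB"] / m["totalMB"] if m["totalMB"] else 0.0
    print(f"active nodes:    {m['activeNodes']}")
    print(f"unhealthy nodes: {m['unhealthyNodes']}")
    print(f"apps running:    {m['appsRunning']}")
    print(f"memory in use:   {used_pct:.1f}% of {m['totalMB']} MB")

    # Conditions an on-call administrator would typically act on
    # (the 90% threshold is an assumed example, not a standard).
    if m["unhealthyNodes"] > 0 or used_pct > 90.0:
        print("WARNING: cluster needs attention")

if __name__ == "__main__":
    report_cluster_health()
```

A check like this can be scheduled from cron or an Oozie/Airflow job and fed into the performance reports mentioned above.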
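Similarly, a minimal sketch of the keytab-based Kerberos ticket handling that automated jobs on a secured Hadoop cluster depend on; the keytab path and principal are hypothetical, while `klist -s` and `kinit -kt` are standard MIT Kerberos client invocations:

```python
# Ensure a valid Kerberos ticket exists before running cluster commands.
import subprocess

KEYTAB = "/etc/security/keytabs/hdfs.headless.keytab"  # assumed path
PRINCIPAL = "hdfs@EXAMPLE.COM"                          # assumed principal

def ensure_kerberos_ticket() -> None:
    # `klist -s` is silent and exits non-zero when no valid ticket is cached.
    if subprocess.run(["klist", "-s"]).returncode != 0:
        # Obtain a fresh ticket non-interactively from the keytab.
        subprocess.run(["kinit", "-kt", KEYTAB, PRINCIPAL], check=True)
        print(f"obtained Kerberos ticket for {PRINCIPAL}")
    else:
        print("valid Kerberos ticket already cached")

if __name__ == "__main__":
    ensure_kerberos_ticket()
```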
Minimum years of experience: 5 - 8 years
Certifications Needed: No
Top 3 responsibilities you would expect the Subcon to shoulder and execute:
- Understand requirements and carry out technical design, coding, testing, and implementation.
- Coordinate with business and technical stakeholders.
- Coordinate with offshore team.
Interview Process (Is face-to-face required?): No
Does this position require Visa independent candidates only? No