Job Seekers, please send resumes to resumes@hireitpeople.com

Must Have Skills:
- Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, HBase, Hive, Scala, Spark, Kafka, Presto)
- Build data pipelines and ETL from heterogeneous sources; you will build data ingestion from various source systems into Hadoop using Kafka, Flume, Sqoop, Spark Streaming, etc. (see the sketch after this list)
- Strong development automation skills. Must be very comfortable with reading and writing Scala, Python or Java code.
- Experience with other open-source technologies such as Druid, Elasticsearch, and Logstash, as well as CI/CD and cloud-based deployments, is a plus
- Ability to research and assess open-source technologies and components, and to recommend and integrate them into the design and implementation
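As a rough illustration of the ingestion work described above, the sketch below shows a minimal Spark Structured Streaming job in Scala that reads events from a Kafka topic and lands them on HDFS as Parquet. The topic, broker, and path names are hypothetical placeholders, and a production pipeline would additionally handle schema parsing, error handling, and monitoring.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object KafkaToHdfsIngest {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("KafkaToHdfsIngest")
      .getOrCreate()

    // Read a stream of events from a Kafka topic (broker and topic names are hypothetical)
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "events_topic")
      .load()
      .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp")

    // Land the raw events on HDFS as Parquet, partitioned by ingest date
    val query = events
      .withColumn("ingest_date", to_date(col("timestamp")))
      .writeStream
      .format("parquet")
      .option("path", "hdfs:///data/raw/events")
      .option("checkpointLocation", "hdfs:///checkpoints/events")
      .partitionBy("ingest_date")
      .start()

    query.awaitTermination()
  }
}
```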
Detailed Job Description:
7 years of experience with the Hadoop ecosystem and Big Data technologies. Hands-on experience with the Hadoop ecosystem: HDFS, MapReduce, HBase, Hive, Scala, Spark, Kafka, Presto. Experience in Scala is a must. Experience with building stream-processing systems using solutions such as Spark Streaming. Experience with other open-source technologies such as Druid, Elasticsearch, and Logstash, as well as CI/CD and cloud-based deployments, is a plus. Ability to dynamically adapt conventional big data frameworks and tools as the use cases require.
Minimum years of experience: 8 - 10 years
Certifications Needed: No
Top 3 responsibilities you would expect the Subcon to shoulder and execute:
- Ensure data delivery for EAP 2.0 services across streaming, daily, weekly, monthly, and quarterly loads; support jobs and data-freshness monitoring for Prod, DR, and ITG environment services
- Analyze incident root causes and fix data issues (without code fixes) to ensure data delivery; housekeeping; code fixes for non-ER incidents
- Handle problem records and class issues to reduce repetitive errors; address EAP logic issues; work on continuous service improvement areas and performance tuning
Interview Process (Is face-to-face required?): No
Does this position require Visa independent candidates only? No