Job Seekers, please send resumes to resumes@hireitpeople.com

Detailed Job Description:
- Strong working knowledge (set-up and administration) of the Hadoop platform and related tools and technologies such as Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, MapReduce, etc.
- Hands-on experience with Spark and scripting languages such as Scala and Python.
- Strong knowledge of cloud technologies, especially AWS, in the area of Big Data.
- Practical knowledge of the end-to-end design and build process for near-real-time and batch data pipelines; expertise with SQL and data modeling.
- Architect Big Data solutions at an enterprise level. Review customers' data architecture strategy. Ensure the architecture supports the requirements of the business process.
- Identify problems and issues affecting multiple data-related areas and facilitate their resolution.
- Effectively anticipate and evaluate the impact of problem solutions that affect multiple areas of the organization.
- Experience in architecting and implementing Data Warehouse / Data Lake solutions.
- Expertise in designing and implementing data pipelines, including ETL/ELT.
- Good command of data visualization tools (QlikView, Tableau, etc.).
- Follow and maintain an agile methodology for delivering on project milestones.
Minimum years of experience*: 10+ years
Certifications Needed: No
Interview Process (Is face to face required?): No
Does this position require Visa independent candidates only? No