Job ID: 34330
Company: Internal Postings
Location: Dearborn, MI
Type: Contract
Duration: 6 Months
Salary: DOE
Status: Active
Openings: 1
Posted: 03 Nov 2021
Job Seekers, Please send resumes to resumes@hireitpeople.com

Detailed Job Description:

  • Strong working knowledge (set-up and administration) of Hadoop platform and related tools and technologies such as Hive, Spark, HBase, Sqoop, Impala, Kafka, Flume, Oozie, MapReduce, etc.
  • Hands-on experience with Spark and scripting languages such as Scala and Python.
  • Strong knowledge of cloud technologies (AWS, Azure, or GCP), especially in the area of Big Data.
  • Practical knowledge of the end-to-end design and build process for near-real-time and batch data pipelines; expertise with SQL and data modeling.
  • Architect Big Data solutions at the enterprise level. Review the customer's data architecture strategy and ensure the architecture supports the requirements of the business process.
  • Identify problems and issues affecting multiple data-related areas and facilitate their resolution.
  • Effectively anticipate and evaluate the impact of solutions to problems that affect multiple areas of the organization.
  • Experience architecting and implementing Data Warehouse / Data Lake solutions.
  • Expertise in designing and implementing data pipelines (ETL/ELT).
  • Good command of data visualization tools (QlikView, Tableau, etc.).
  • Follow and maintain an agile methodology for delivering on project milestones.

Minimum years of experience*: 15+