Job ID: 37108
Company: Internal Postings
Location: Maryland Heights, MO
Type: Contract
Duration: 12 Months
Salary: DOE
Status: Active
Openings: 1
Posted: 19 May 2022
Job seekers, please send resumes to resumes@hireitpeople.com
Must Have Skills:
  • Scala
  • Java
  • Big Data
  • PySpark
Nice to Have Skills:
  • Scala, Spark
  • HBase, Hive, Kafka
Detailed Job Description:
  • Completely hands-on role requiring specialized knowledge of implementing Big Data solutions in Scala.
  • Must have a Java programming background.
  • Will be responsible for solving critical technical problems for the team and creating standardized solution templates that junior team members can replicate.
  • This is an individual contributor role reporting directly to a director and serving as the director's technical SME.
  • Requires deep technical knowledge and extensive experience working on Big Data projects using Hadoop ecosystem tools such as Scala, Spark, HBase, Hive, and Kafka.
  • Responsible for the design, development, and implementation of Big Data projects using Hadoop ecosystem tools such as Scala, Spark, Hive, and Kafka.
  • Resolve issues regarding development, operations, implementations, and system status.
  • This role is responsible for translating requirements into new Big Data solutions, maintaining and executing existing processes, and driving continuous improvement.
Job Responsibilities:
  • Work with teams to design and build large-scale data processing pipelines, analytics sandboxes, and at-scale production delivery platforms (an illustrative pipeline sketch follows this list).
  • Aid in technology selection as the business defines new features requiring expanded system capability.
  • Participate in integration strategy planning as related to acquisition of new data sources.
  • Use creative and results-driven problem solving to lead engineering efforts for product teams.
  • Educate, engage, and work closely with the development and data teams to optimize processes, implement best practices, and increase adoption of tooling.
  • Lead the design and implementation of proofs of concept for new data tooling.
  • Design, architect, and help maintain enterprise solutions on the Big Data platform.
  • Lead systems implementations and detailed functional/technical system design.
  • Leverage extensive knowledge of distributed systems, process flows, and procedures to aid analyses and recommendations for solution offerings.
  • Guide, mentor, and assist in the development of the proposed architectural solution.
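In practice, the pipeline responsibility above is Spark work in Scala. As a purely illustrative sketch (all paths, column names, and the dataset are hypothetical placeholders, not project code), a minimal batch read/cleanse/write pipeline might look like this:

  import org.apache.spark.sql.{SparkSession, functions => F}

  // Illustrative batch data-preparation pipeline; every path and column name is a placeholder.
  object CustomerEventPipeline {
    def main(args: Array[String]): Unit = {
      val spark = SparkSession.builder()
        .appName("customer-event-pipeline")
        .getOrCreate()

      // Read raw, semi-structured events (assumed to be JSON files on HDFS).
      val raw = spark.read.json("hdfs:///data/raw/customer_events/")

      // Basic cleansing: drop rows missing a key, normalize the timestamp, deduplicate.
      val cleaned = raw
        .filter(F.col("customer_id").isNotNull)
        .withColumn("event_ts", F.to_timestamp(F.col("event_time")))
        .withColumn("event_date", F.to_date(F.col("event_ts")))
        .dropDuplicates("customer_id", "event_ts")

      // Write a partitioned, query-ready dataset for downstream Hive/Impala consumers.
      cleaned.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("hdfs:///data/curated/customer_events/")

      spark.stop()
    }
  }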
REQUIRED QUALIFICATIONS:
  • Ability to use a wide variety of open source technologies and cloud services
  • Proficiency with Software Development Lifecycle (SDLC)
  • Good knowledge of the programming language(s), application server, database server and/or architecture of the system being developed.
  • Solid understanding of current programming languages and the ability to employ any or all of them to solve the business needs of the client's internal customers.
  • Coding/scripting experience using Python, Java, Scala, and shell scripts.
  • Good knowledge of Windows/Linux/Solaris operating systems and shell scripting.
  • Deep understanding of data engineering concepts.
  • Experience working with Spark for data manipulation, preparation, cleansing
  • Experience across the Hadoop ecosystem, including HDFS, Hive, YARN, Flume, Oozie, Cloudera Impala, ZooKeeper, Hue, Sqoop, Kafka, Storm, Spark, and Spark Streaming, plus NoSQL database knowledge.
  • Strong professional functional programming experience using Scala and Java.
  • Strong experience in Scala (functions, generics, implicits, collections) or other functional languages; see the illustrative sketch after this list.
  • Experience building large-scale data processing pipelines using technologies like Apache Spark, Hadoop, and Hive for both structured and unstructured data assets.
  • Good debugging skills are required.
  • At least 4 years of experience architecting and designing Big Data solutions, and at least 10 years of overall experience.
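As a point of reference for the functional-Scala qualification above, candidates should be comfortable reading and writing code along these lines (an illustrative sketch with invented names, not project code):

  // Generic, higher-order helper resolved through an implicit Ordering.
  def topN[A](items: Seq[A], n: Int)(implicit ord: Ordering[A]): Seq[A] =
    items.sorted(ord.reverse).take(n)

  // Collections and function values: group, transform, and aggregate a small dataset.
  val wordCounts: Map[String, Int] =
    Seq("spark", "hive", "kafka", "spark", "hbase", "spark")
      .groupBy(identity)
      .map { case (word, occurrences) => word -> occurrences.size }

  val topCounts = topN(wordCounts.values.toSeq, 3) // Seq(3, 1, 1)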

Minimum years of experience: 8-10 years

Certifications Needed: No

Top 3 responsibilities you would expect the Subcon to shoulder and execute:

  1. Work with teams to design and build large-scale data processing pipelines
  2. Analytics sandboxes
  3. At-scale production delivery platforms

Interview Process (Is face-to-face required?) No

Does this position require visa-independent candidates only? No