Job ID: 28063
Company: Internal Postings
Location: Atlanta, GA
Type: Contract
Duration: 6 Months+
Salary: DOE
Status: Active
Openings: 1
Posted: 24 Aug 2020
Job Seekers, please send resumes to resumes@hireitpeople.com
Must-Have Skills:
  • Apache Kafka
  • Spark
  • Hadoop
Detailed Job Description:
  • Seeking an experienced Big Data Engineer for Atlanta.
  • Will work on NS Big Data platforms (Cloudera).
Top 3 responsibilities the subcontractor is expected to shoulder and execute:
  1. Must have Big Data engineering experience and demonstrate an affinity for working with others to create successful solutions.
  2. Must be a very good communicator and have experience working with business areas to translate their business data needs and data questions into project requirements.
  3. Will participate in all phases of the Data Engineering life cycle, independently and collaboratively writing project requirements, architecting solutions, and performing data ingestion development.

Minimum years of experience: 6+

Certifications Needed: None

Skills and Experience required:

  • 6+ years of overall IT experience
  • 3+ years of experience with high-velocity, high-volume stream processing using Apache Kafka and Spark Streaming
  • Experience with real-time data processing and streaming techniques using Spark Structured Streaming and Kafka (see the sketch after this list)
  • Deep knowledge of troubleshooting and tuning Spark applications
  • 3+ years of experience with data ingestion from message queues (TIBCO, IBM, etc.) and from files in formats such as JSON, XML, and CSV across different platforms
  • 3+ years of experience with Big Data tools/technologies such as Hadoop, Spark, Spark SQL, Kafka, Sqoop, Hive, S3, and HDFS, or cloud platforms (e.g., AWS, GCP)
  • 3+ years of experience building, testing, and optimizing Big Data ingestion pipelines, architectures, and data sets
  • 2+ years of experience with Python (and/or Scala) and PySpark
  • 2+ years of experience with NoSQL databases, including HBase and/or Cassandra
  • Knowledge of the Unix/Linux platform and shell scripting is a must
  • Strong analytical and problem-solving skills
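
As a rough illustration of the core stack above, here is a minimal PySpark Structured Streaming sketch that consumes JSON events from a Kafka topic and lands them on HDFS as Parquet. It is a sketch under assumptions, not part of this posting: the topic name, broker address, schema, and paths are all hypothetical placeholders.

    # Minimal sketch: Kafka -> Spark Structured Streaming -> HDFS (Parquet).
    # All names below (topic, brokers, schema, paths) are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import from_json, col
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

    # Assumed event schema; a real pipeline would derive this from the source system.
    schema = StructType([
        StructField("event_id", StringType()),
        StructField("event_type", StringType()),
        StructField("event_time", TimestampType()),
    ])

    # Read the raw Kafka stream; the payload arrives as bytes in `value`.
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder
           .option("subscribe", "events")                      # placeholder
           .load())

    # Parse the JSON payload into typed columns.
    events = (raw.selectExpr("CAST(value AS STRING) AS json")
              .select(from_json(col("json"), schema).alias("e"))
              .select("e.*"))

    # Append to HDFS as Parquet; the checkpoint directory makes the sink restartable.
    query = (events.writeStream
             .format("parquet")
             .option("path", "hdfs:///data/events")               # placeholder
             .option("checkpointLocation", "hdfs:///chk/events")  # placeholder
             .outputMode("append")
             .start())

    query.awaitTermination()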

Preferred:

  • Experience with Cloudera/Hortonworks HDP and HDF platforms
  • Experience with NiFi, Schema Registry, and NiFi Registry
  • Strong SQL skills, with the ability to write intermediate-complexity queries (see the sketch after this list)
  • Strong understanding of relational dimensional modeling
  • Experience with Git version control
  • Experience with REST APIs and web services
  • Good business analysis and requirements gathering/writing skills
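
To make "intermediate complexity" concrete, here is a minimal Spark SQL sketch: a join plus a grouped aggregate, filtered on the aggregate and ranked with a window function, run through PySpark. The table and column names are hypothetical placeholders, not details from this posting.

    # Minimal sketch of an intermediate-complexity Spark SQL query.
    # Table and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("sql-sketch")
             .enableHiveSupport()
             .getOrCreate())

    # Join two Hive tables, aggregate, filter on the aggregate, and rank by volume.
    top_routes = spark.sql("""
        SELECT s.route_id,
               r.route_name,
               COUNT(*)             AS shipments,
               AVG(s.transit_hours) AS avg_transit_hours,
               RANK() OVER (ORDER BY COUNT(*) DESC) AS volume_rank
        FROM shipments s
        JOIN routes r ON r.route_id = s.route_id
        WHERE s.ship_date >= '2020-01-01'
        GROUP BY s.route_id, r.route_name
        HAVING COUNT(*) > 100
    """)
    top_routes.show()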

Interview Process (Is face-to-face required?): No

Does this position require visa-independent candidates only? No