
Data Engineer (Hadoop, Spark, Python, Scala & Java Developer) Resume


SUMMARY:

  • 9+ years of professional work experience in the IT industry in enterprise application development, covering business analysis, development, maintenance, and support.
  • Analytical and results-driven professional with extensive experience managing complex IT projects and exceeding expectations; received customer appreciation 12 times.
  • Worked on Hadoop, Hive, Java, Python, Scala, and the Struts web framework.
  • Good knowledge of Core Java and the J2EE application model for developing web-based solutions using Servlets and JSP.
  • Capable of processing large sets of structured, semi-structured and unstructured data.
  • Experience in writing MapReduce jobs in Java with Eclipse.
  • Good working experience using Sqoop to import data into HDFS from relational databases (see the sketch after this list).
  • Experience using the Avro file format.
  • Experience in writing SQL queries, stored procedures, functions, and triggers using PL/SQL.
  • Expertise in Java/J2EE, Oracle, and MySQL technologies. Good exposure to planning and executing all phases of the software development life cycle, including analysis, design, development, and testing. Main responsibilities include understanding client requirements, developing solutions, and designing application architecture.
  • Experience working on Agile projects.
  • Strong problem-solving and communication skills; a good team player.
  • Practiced in clarifying business requirements, performing gap analysis between goals and existing procedures/skill sets, and designing process and system improvements to increase productivity and reduce costs.
  • Sun Certified Java Programmer (SCJP).
  • MIT Professional Education Certification: Tackling the Challenges of Big Data.
  • Research-oriented, motivated, proactive, self-starter with strong technical, analytical and interpersonal skills.
  • Attended Cloudera Spark and Confluent Kafka trainings.
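
As an illustration of the Sqoop-to-HDFS ingestion mentioned above, the following is a minimal sketch of a Python wrapper around a Sqoop import; the JDBC URL, credentials file, table, and target directory are hypothetical placeholders, not details from any actual engagement.

    import subprocess

    # Hypothetical connection details -- placeholders only.
    JDBC_URL = "jdbc:mysql://db.example.com:3306/sales"
    TABLE = "orders"
    TARGET_DIR = "/data/raw/orders"

    def sqoop_import():
        """Import one relational table into HDFS using Sqoop."""
        subprocess.run([
            "sqoop", "import",
            "--connect", JDBC_URL,
            "--username", "etl_user",
            "--password-file", "/user/etl/.db_password",
            "--table", TABLE,
            "--target-dir", TARGET_DIR,
            "--as-avrodatafile",   # Avro output, matching the format noted above
            "--num-mappers", "4",
        ], check=True)

    if __name__ == "__main__":
        sqoop_import()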

TECHNICAL SKILLS:

Programming languages: Java/J2EE, Struts, JSP.

Scripting languages: Python, Unix shell scripting.

Web technologies: HTML & CSS.

Frameworks: Struts, Knova, Casper.

Databases & tools: Oracle, Teradata, TOAD, Teradata SQL Assistant, MySQL, SQL Developer

Big data: Hadoop, Hive, MapReduce, Impala, Sqoop, CDH 5.8.

Operating systems: Unix, Linux, Ubuntu, Confidential

Network protocols: TCP/IP, HTTP, HTTPS

Version control systems: Git, Perforce

Other tools: Jenkins, Jira, Pivotal Tracker, Rally, PuTTY, Buganizer

PROFESSIONAL EXPERIENCE:

Confidential

Data Engineer (Hadoop, Spark, Python, Scala & Java Developer)

Responsibilities:

  • Analyzed the requirements for the segment client and national account data.
  • Imported data from CCW to the Cigna cube data warehouse.
  • Implemented the Hadoop platform in the Cigna cloud environment.
  • Ingested the cube data into the Cigna Hadoop platform (see the sketch below).
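
A minimal sketch of how such an ingest might look in PySpark, assuming JDBC connectivity to the source warehouse; the host, credentials, and table names below are hypothetical placeholders, and the Teradata JDBC driver must be on the Spark classpath.

    from pyspark.sql import SparkSession

    # Hypothetical host, credentials, and table names -- placeholders only.
    spark = (SparkSession.builder
             .appName("cube-ingest")
             .enableHiveSupport()
             .getOrCreate())

    cube_df = (spark.read.format("jdbc")
               .option("url", "jdbc:teradata://tdhost.example.com/DATABASE=ccw")
               .option("driver", "com.teradata.jdbc.TeraDriver")
               .option("dbtable", "ccw.segment_client_accounts")
               .option("user", "etl_user")
               .option("password", "***")
               .load())

    # Land the cube data as a Hive table on the Hadoop platform.
    cube_df.write.mode("overwrite").saveAsTable("cube_dw.segment_client_accounts")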

Environment: Java, Python, Scala, shell scripting, Hadoop, Spark, Hive, Sqoop, Cloudera CDH 5.8, Teradata, Eclipse, WinSCP, Git, PuTTY, Jenkins, Rally

Confidential

Hadoop developer & Module Lead

Responsibilities:

  • Hands-on coding: wrote and tested code for the ingest automation process, covering full and incremental loads. Designed the solution and developed the data ingestion program using Sqoop, MapReduce, shell scripts, and Python (see the sketch after this list).
  • Developed various automated scripts for DI (Data Ingestion) and DL (Data Loading) using Python and Java MapReduce.
  • Developed an export framework using Python, Sqoop, Oracle, and MySQL.
  • Worked on a PDF parser using Java and Python.
  • Developed a fully customized framework using Python, shell scripts, Sqoop, and Hive.
  • Extensively worked on HDFS, Hive, Oozie, and Java.
  • Dealt with huge volumes of data.
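
A minimal sketch of the incremental-load piece, assuming a Sqoop incremental append driven from Python; the table, column, and path names are hypothetical placeholders, and persisting the new high-water mark after the run is omitted for brevity.

    import subprocess

    # Hypothetical state file holding the last ingested ORDER_ID.
    STATE_FILE = "/var/lib/ingest/orders.last_value"

    def read_last_value(default="0"):
        try:
            with open(STATE_FILE) as f:
                return f.read().strip() or default
        except FileNotFoundError:
            return default

    def incremental_import():
        """Append only rows whose ORDER_ID exceeds the last ingested value."""
        subprocess.run([
            "sqoop", "import",
            "--connect", "jdbc:oracle:thin:@dbhost.example.com:1521/ORCL",
            "--username", "etl_user",
            "--password-file", "/user/etl/.db_password",
            "--table", "ORDERS",
            "--target-dir", "/data/raw/orders",
            "--incremental", "append",
            "--check-column", "ORDER_ID",
            "--last-value", read_last_value(),
        ], check=True)

    if __name__ == "__main__":
        incremental_import()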

Environment: Java, Python, shell scripting, Hadoop, MapReduce, Hive, Sqoop, Cloudera CDH 5.8, Oracle, Eclipse, WinSCP, Git, PuTTY, Jenkins, Jira, REST API

Confidential

Hadoop Developer

Responsibilities:

  • Designed the solution and prepared the blueprint for aggregate tables, report-generation queries, shell scripts, etc.
  • Extensively worked on HDFS, Hive, Sqoop, and Java.
  • Used Hive to analyze the partitioned data.
  • Extensively worked on Python and built the custom ingest framework.
  • Worked on the Avro and Parquet file formats to ingest data from source to target (see the sketch after this list).
  • Responsible for the ingest framework that uses Sqoop and stores data in HDFS.
  • Designed and implemented various metrics that can statistically signify the success of the experiment.
  • Worked on REST APIs using Python.
  • Testing and validation: unit testing, performance testing, integration testing, and UAT.
  • Submitted 24 CLs (changelists) and checked them in to Stash.
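
A minimal sketch of the Avro-to-Parquet step in PySpark; the paths and partition column are hypothetical placeholders, and the spark-avro package must be supplied (e.g. via spark-submit --packages).

    from pyspark.sql import SparkSession

    # Paths and partition column are hypothetical placeholders.
    spark = SparkSession.builder.appName("avro-to-parquet").getOrCreate()

    # Read the raw Avro landed by the ingest (needs the spark-avro package).
    source_df = spark.read.format("avro").load("/data/raw/orders_avro")

    # Write Parquet partitioned by load date so Hive can prune partitions.
    (source_df.write
        .mode("append")
        .partitionBy("load_date")
        .parquet("/data/curated/orders_parquet"))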

Environment: Java, Python, Hadoop, MapReduce, Hive, Sqoop, Cloudera CDH 5.5.4, Perforce, Git, Buganizer

Confidential

Java Developer & Hadoop Developer

Responsibilities:

  • Implemented the Biopic features using the SIFT algorithm in Java.
  • Coded and implemented MapReduce programs using Java.
  • Worked on the Delete Printer module using Python.
  • Worked on HDFS and Hive to ingest the structured data into the raw data zone.
  • Wrote UDFs in Java for date and time formatting.
  • Worked with Gradle to create the JAR file committed into the framework.
  • Extensively worked on Python and REST APIs (see the sketch after this list).
  • Submitted 4 CLs (changelists) and checked them in to Stash.
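
A minimal sketch of a Python REST endpoint in the spirit of the Delete Printer module, assuming Flask; the route, in-memory store, and names are hypothetical placeholders.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # Hypothetical in-memory registry standing in for the real printer store.
    PRINTERS = {"hp-101": {"status": "online"}}

    @app.route("/printers/<printer_id>", methods=["DELETE"])
    def delete_printer(printer_id):
        """Remove a printer and report the result as JSON."""
        if PRINTERS.pop(printer_id, None) is None:
            return jsonify(error="printer not found"), 404
        return jsonify(deleted=printer_id), 200

    if __name__ == "__main__":
        app.run(port=8080)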

Environment: Java, Python, Hadoop, Hive, Sqoop, Perforce, Git, Buganizer, Confidential, REST API

Confidential

Hadoop Developer

Responsibilities:

  • Developed Hive tables with partitioning and bucketing using shell scripts (see the sketch after this list).
  • Worked on Java MapReduce to append the data.
  • Worked on JSP and Struts to generate the aggregate data.
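
A minimal sketch of the partitioned, bucketed table DDL, driven from Python the same way a shell script would call the Hive CLI; the database, table, and column names are hypothetical placeholders.

    import subprocess

    # Hypothetical database, table, and column names -- placeholders only.
    DDL = """
    CREATE TABLE IF NOT EXISTS sales.orders_bucketed (
        order_id BIGINT,
        customer_id BIGINT,
        amount DECIMAL(10,2)
    )
    PARTITIONED BY (order_date STRING)
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
    STORED AS ORC;
    """

    # Equivalent to `hive -e "$DDL"` in a shell script.
    subprocess.run(["hive", "-e", DDL], check=True)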

Environment: Java, Hadoop, Hive, Sqoop, HPSD.

Confidential

Team Lead

Responsibilities:

  • Worked on the Casper and Struts frameworks to develop the parts and turbine pricing module based on the region.
  • Worked on SQL scripts and the Oracle database for application transaction logic.
  • Worked on the data model design.
  • Interacted with the business community, gathered requirements, and incorporated identified factors into Informatica mappings.
  • Extensively worked on various transformations such as Lookup, Update Strategy, Router, Joiner, and Sequence Generator.
  • Loaded operational data from Oracle and flat files into various data marts (see the sketch after this list).
  • Designed mappings and workflows with many sessions and used the Informatica scheduler to schedule jobs.
  • Worked with the Workflow Monitor to monitor, schedule, and unschedule workflows.
  • Extensively worked on Approx to schedule the data load transfer jobs.
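
Informatica mappings are built in a GUI rather than in code, so nothing from that work can be reproduced directly; as a rough, hypothetical Python analogue of the lookup-and-load flow (flat file in, dimension lookup, data-mart fact table out), assuming pandas, SQLAlchemy, and the cx_Oracle driver:

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical DSN, file path, and table names -- placeholders only.
    engine = create_engine(
        "oracle+cx_oracle://etl_user:***@dbhost:1521/?service_name=ORCL")

    # Source: a flat file; a lookup table supplies the region dimension key.
    orders = pd.read_csv("/data/flat/orders.csv")
    regions = pd.read_sql("SELECT region_code, region_key FROM dim_region", engine)

    # Lookup-transformation analogue: join the dimension key onto each row.
    loaded = orders.merge(regions, on="region_code", how="left")

    # Load the result into the data-mart fact table.
    loaded.to_sql("fact_orders", engine, if_exists="append", index=False)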

Environment: Java, Hive, Sqoop, WinSCP, Informatica, HPSD, Approx.

Confidential

Team Member

Responsibilities:

  • Analyzed the BRDs provided by the client and worked on UML diagrams.
  • Worked on the design and development of various modules and sub-modules using Java, JSP, and the Casper and Struts frameworks.
  • Extensively worked on HTML and CSS for the help portal.
  • Worked on integrating the framework with the third-party Consona framework using Struts.
  • Worked on the search module using Java.
  • Worked on the MVC model using the Struts framework.
  • Was part of implementing the mailing functionality for document downloads (see the sketch below).
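
The mailing functionality above was Java-based; since the sketches in this document use Python, here is a hypothetical smtplib equivalent that mails a downloaded document as an attachment (the host, addresses, and file path are placeholders).

    import smtplib
    from email.message import EmailMessage

    # Hypothetical addresses, host, and file path -- placeholders only.
    msg = EmailMessage()
    msg["Subject"] = "Your requested document"
    msg["From"] = "helpportal@example.com"
    msg["To"] = "user@example.com"
    msg.set_content("Please find the requested document attached.")

    with open("/tmp/user_guide.pdf", "rb") as f:
        msg.add_attachment(f.read(), maintype="application",
                           subtype="pdf", filename="user_guide.pdf")

    with smtplib.SMTP("smtp.example.com") as server:
        server.send_message(msg)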

Environment: Java, Eclipse, HTML, JSP, Oracle, TOAD, WinSCP, Informatica, HPSD.
