Technical Project Lead Resume

Lyndhurst, NJ

SUMMARY

  • Over 10 years of professional IT experience, including 3 years of Big Data/Hadoop and 6 years of web and Windows application development with .NET and Java.
  • Hands-on experience with the Hadoop stack (MapReduce, HDFS, Sqoop, Pig, Hive, HBase, Spark, Kafka, Control-M, Oozie, ZooKeeper and Talend).
  • Experience in configuring and administering Hadoop clusters using major Hadoop distributions such as Hortonworks and Cloudera.
  • Proven expertise in performing analytics on Big Data using MapReduce, Hive, Pig and Talend.
  • Experienced in performing real-time analytics on NoSQL databases such as HBase and Cassandra.
  • Experienced with ETL to load data into Hadoop/NoSQL.
  • Experienced with dimensional modeling, data migration, data masking, data cleansing, data profiling and ETL processes for data warehouses.
  • Experience in integrating various data sources such as SQL Server, Oracle, Vertica, MySQL and flat files.
  • Worked with the Oozie workflow engine and Control-M to schedule time-based jobs that perform multiple actions.
  • Experienced in importing and exporting data between RDBMS and HDFS using Sqoop.
  • Analyzed large amounts of data by writing Pig scripts and Hive queries.
  • Handled logical implementation of, and interaction with, HBase.
  • Experienced in writing MapReduce programs and UDFs for both Hive and Pig in Java (a minimal UDF sketch follows this list).
  • Used Kafka to stream data from different sources to HDFS.
  • Experience with configuration of Hadoop ecosystem components: Hive, HBase, Pig, Sqoop, ZooKeeper.
  • Experience with configuration and administration of Talend.
  • Good experience in Hive partitioning and bucketing, performing different types of joins on Hive tables, and implementing Hive SerDes such as JSON and Avro.
  • Supported MapReduce Programs running on the cluster and wrote custom MapReduce Scripts for Data Processing in Java.
  • Experience in testing MapReduce programs using JUnit.
  • Implemented several web-based, enterprise-level applications using JavaScript and C# .NET.
  • Experience with web-based UI development using jQuery, ASP.NET, C# .NET, CSS and HTML5.
  • Experienced in implementing and consuming SOAP web services.
  • Experienced in writing complex SQL queries and SQL tuning, and in writing PL/SQL blocks such as stored procedures, functions, cursors, indexes, triggers and packages.
  • Experienced in all facets of Software Development Life Cycle (Analysis, Design, Development, Testing and maintenance) using Waterfall and Agile methodologies.
  • Experience with various version control systems such as CVS, TFS, SVN.
  • Motivated team player with excellent communication, interpersonal, analytical, problem solving skills and zeal to learn new technologies.
  • Highly adept at promptly and thoroughly mastering new technologies with a keen awareness of new industry developments and the evolution of next generation programming solutions.
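
As an illustration of the Hive and Pig UDF work called out above, below is a minimal sketch of a Hive UDF written in Java. The class name, the masking rule and the column it would be applied to are hypothetical examples, not code from any of the projects described in this resume.

    // Minimal sketch of a Hive UDF in Java (hypothetical example, not project code).
    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    public final class MaskEmailUDF extends UDF {
        // Masks the local part of an e-mail address, e.g. "jdoe@example.com" -> "****@example.com".
        public Text evaluate(Text email) {
            if (email == null) {
                return null;                     // pass NULLs through, as Hive expects
            }
            String value = email.toString();
            int at = value.indexOf('@');
            if (at <= 0) {
                return email;                    // not an e-mail address, return unchanged
            }
            return new Text("****" + value.substring(at));
        }
    }

Packaged into a jar, a UDF like this would typically be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being used in queries.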

TECHNICAL SKILLS

Hadoop Ecosystem: HDFS, MapReduce, Hive, Pig, HBase, Sqoop, Flume, Oozie, ZooKeeper, Kafka, Talend

Languages: Java, SQL, PL/SQL, Pig Latin, HiveQL, Unix shell scripting

Web Technologies: Java, C#, ASP, VB

Frameworks: MVC, MVP, MVVM, NHibernate, JUnit, NUnit

NoSQL Databases: HBase, Cassandra

Databases: Oracle 11g/10g/9i, MySQL, MS SQL Server, Vertica

Application Server: Apache Tomcat, IIS

Web Services: WSDL, SOAP, WCF, REST

Methodologies: Scrum, Waterfall

PROFESSIONAL EXPERIENCE

Confidential, Burlington MA

Sr Hadoop Developer

Responsibilities:

  • Worked as part of the R&D team at Zeta, experimenting with emerging technologies.
  • Involved in installing and configuring the Hadoop ecosystem and Cloudera Manager using CDH 5.4 and the Hortonworks distribution.
  • Worked on a POC to migrate traditional relational data processing to Hadoop.
  • Used Talend as the ETL tool with Hadoop and other database components.
  • Used Control-M as the workflow job scheduler for all big data Talend jobs.
  • Used Sqoop to import data from Vertica and Oracle.
  • Worked extensively on Talend, Pig and Hive for ETL processing.
  • Responsible for importing log files from various sources into HDFS using Kafka (see the producer sketch after this list).
  • Used HBase and Cassandra as data stores for MicroStrategy and Jaspersoft iReport BI reporting.
  • Experimented with HAWQ to speed up queries between MicroStrategy and HBase.
  • Optimized Hive queries using partitioning and bucketing techniques to control the amount of data scanned.
  • Developed unit test cases using the JUnit testing framework.
  • Used ImportTsv to create dynamic columns in HBase.
  • Experienced in monitoring the cluster using Cloudera Manager and Ambari.
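
To make the Kafka ingestion bullet above concrete, here is a minimal sketch of a Java producer that publishes log lines to a topic; the broker address, topic name and log path are hypothetical placeholders, and landing the data in HDFS would be handled by a separate downstream consumer or connector, as in the project described.

    // Minimal sketch of a Kafka producer that ships log lines to a topic
    // (broker address, topic name and log path are hypothetical placeholders).
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.Properties;

    public class LogFileProducer {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Publish each log line; a downstream consumer lands the topic in HDFS.
                for (String line : Files.readAllLines(Paths.get("/var/log/app/app.log"))) {
                    producer.send(new ProducerRecord<>("app-logs", line));
                }
            }
        }
    }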

Environment: Hadoop, HDFS, HBase, Talend, MapReduce, Kafka, Java, Hive, Pig, Sqoop, Oozie, SQL, ETL, Cloudera Manager, Ambari, MySQL, Oracle, Vertica.

Confidential, Boston MA

Sr Hadoop Developer

Responsibilities:

  • Responsible for managing data coming from different sources; involved in HDFS maintenance and loading of structured and unstructured data.
  • Integrated the Quartz scheduler with Oozie workflows to pull data from multiple data sources in parallel using forks.
  • Processed input from multiple data sources in the same reducer using GenericWritable and MultipleInputs.
  • Created a data pipeline of MapReduce programs using chained mappers.
  • Visualized HDFS data for customers in a BI tool via the Hive ODBC driver.
  • Worked on Big Data processing of clinical and non-clinical data using MapReduce.
  • Implemented complex MapReduce programs to perform map-side joins using the distributed cache in Java (see the mapper sketch after this list).
  • Responsible for importing log files from various sources into HDFS using Flume.
  • Created a customized BI tool for the management team that performs query analytics using HiveQL.
  • Used Hive and Pig to generate BI reports.
  • Used Sqoop to load data from MySQL into HDFS on a regular basis.
  • Created partitions and buckets based on state for further processing using bucket-based Hive joins.
  • Created Hive generic UDFs to implement business logic that varies by policy.
  • Moved relational database data into Hive dynamic-partition tables via staging tables using Sqoop.
  • Optimized Hive queries using partitioning and bucketing techniques to control the amount of data scanned.
  • Worked on custom Pig loaders and storage classes to handle a variety of data formats such as JSON and XML.
  • Used the Oozie workflow engine to manage interdependent Hadoop jobs and to automate several types of Hadoop jobs such as Java MapReduce, Hive, Pig and Sqoop.
  • Developed unit test cases using the JUnit testing framework.
  • Experienced in monitoring the cluster using Cloudera Manager.
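
Below is a minimal sketch of the map-side join pattern mentioned above, using the distributed cache with the MapReduce (new API) framework; the lookup file, join key and record layout are hypothetical, not taken from the actual project.

    // Minimal sketch of a map-side join via the distributed cache (hypothetical fields/files).
    // Driver side: job.addCacheFile(new URI("/lookup/policy.csv#policy"));
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    public class MapSideJoinMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
        private final Map<String, String> policyLookup = new HashMap<>();

        @Override
        protected void setup(Context context) throws IOException {
            // The small "policy" table is symlinked into the task's working directory
            // by the distributed cache and loaded fully into memory, once per mapper.
            try (BufferedReader reader = new BufferedReader(new FileReader("policy"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] parts = line.split(",");
                    policyLookup.put(parts[0], parts[1]);   // policyId -> policyName
                }
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Join each large-side record against the in-memory lookup on its first field.
            String[] fields = value.toString().split(",");
            String policyName = policyLookup.get(fields[0]);
            if (policyName != null) {
                context.write(new Text(value.toString() + "," + policyName), NullWritable.get());
            }
        }
    }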

Environment: Hadoop, HDFS, HBase, MongoDB, Spark, MapReduce, Kafka, Teradata, Java, Hive, Pig, Sqoop, Flume, Oozie, Hue, SQL, ETL, Cloudera Manager, MySQL.

Confidential - Lyndhurst, NJ

Technical Project Lead

Responsibilities:

  • Worked on importing data from various sources and performed transformations using MapReduce, Hive to load data into HDFS.
  • Involved in complete implementation lifecycle, specialized in writing custom MapReduce, Pig and Hive programs.
  • Handled importing of data from RDBMS into HDFS using Sqoop.
  • Managed data flow into Pivotal HAWQ (internal/external tables).
  • Experienced in data cleansing using Pig Latin operations and UDFs.
  • Experienced in writing Hive Queries for analyzing data in Hive warehouse using Hive Query Language.
  • Involved in creating Hive tables, loading with data and writing hive queries to process the data.
  • Created scripts to automate the process of Data Ingestion.
  • Developed Pig scripts for source data validation and transformation.
  • Performed various performance optimizations like using distributed cache for small datasets, Partitioning, Bucketing in Hive and Map Side Joins.
  • Managed and scheduled jobs to remove duplicate log data files in HDFS using Oozie.
  • Extensively used HiveQL to query and search for particular strings in Hive tables in HDFS.
  • Experience in developing custom UDFs in Java to extend Hive and Pig Latin functionality (see the Pig UDF sketch after this list).
  • Created HBase tables to store various data formats for data coming from different portfolios.
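
As an illustration of the custom Java UDF work above, here is a minimal sketch of a Pig EvalFunc used for source-data validation; the class name and the specific cleansing rule are hypothetical, not code from the actual project.

    // Minimal sketch of a Pig EvalFunc UDF in Java for source-data validation
    // (class name and rule are hypothetical).
    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    import java.io.IOException;

    public class NormalizeState extends EvalFunc<String> {
        // Trims and upper-cases a two-letter state code; returns null for invalid input
        // so that the Pig script can FILTER such records out during validation.
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null;
            }
            String state = input.get(0).toString().trim().toUpperCase();
            return state.length() == 2 ? state : null;
        }
    }

In a Pig script the jar would be loaded with REGISTER and the function invoked directly, e.g. NormalizeState(state), before filtering out the null results.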

Environment: Hortonworks, MapReduce, HBase, HDFS, Hive, Pig, Java, SQL, Cloudera Manager, Sqoop, Flume, Oozie.

Confidential, Midland Michigan

Senior Software Engineer

Responsibilities:

  • Worked on web and Windows application development.
  • Worked on an application that interfaces with different technology platforms such as SharePoint, SAP, Oracle and SQL.
  • Worked with BI teams in generating the reports and designing ETL workflows on SAP BI.
  • Worked on 3-tier architecture for application development.
  • Used OOP concepts in application development.
  • Worked as coordinator between onshore and offshore teams.
  • Worked closely with the business to gather requirements.
  • Handled a team of 12 people.

Environment: Microsoft Visual Studio, Web Services, UML, MVC, NHibernate, AJAX, JavaScript, NUnit, PL/SQL, Oracle 10g, SVN, TFS

Confidential

Software Engineer

Responsibilities:

  • Used Rational Rose for use case, class, sequence and object diagrams in the design phase.
  • Involved in creating UML diagrams such as class, activity and sequence diagrams using IBM Rational Rose modeling tools.
  • Involved in the full life cycle development of the modules for the project.
  • Used Eclipse and Visual Studio IDE for application development.
  • Used the Spring and .NET frameworks for dependency injection.
  • Worked with back-end databases such as Oracle and MS SQL.
  • Developed web applications and services using C# and ASP.NET.
  • Used Struts (MVC) for developing presentation layer.
  • Used IIS application server for deploying applications.
  • Used SOAP XML and WCF web services for transferring data between different applications.
  • Used MVC design pattern for designing application.
  • Persistence layer was implemented using Hibernate Framework. Integrated Hibernate with Spring framework.
  • Worked with complex SQL queries, SQL Joins and Stored Procedures using TOAD for data retrieval and update.
  • Used JUnit and NUnit for unit testing (see the test sketch after this list).
  • Used Log4j to capture logs, including runtime exceptions.
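
A minimal sketch of the kind of JUnit unit test referred to above; the class under test is a trivial inline example so that the sketch is self-contained, not code from the actual project.

    // Minimal, self-contained JUnit 4 sketch (hypothetical class under test).
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class TaxCalculatorTest {
        // Trivial class under test, defined inline so the sketch compiles on its own.
        static class TaxCalculator {
            double totalWithTax(double amount, double rate) {
                return amount * (1 + rate);
            }
        }

        @Test
        public void totalIncludesTax() {
            assertEquals(108.0, new TaxCalculator().totalWithTax(100.0, 0.08), 0.001);
        }
    }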

Environment: Eclipse, Microsoft Visual Studio, Web Services, UML, MVC, NHibernate, JSP, WSDL, JMS, AJAX, JavaScript, JUnit, NUnit, PL/SQL, Oracle 10g, SVN, TFS
