
Hadoop Developer Resume


Edison, NJ

SUMMARY:

  • Overall 7+ years of professional experience in IT in Analysis, Design, Development, Testing, Documentation, Deployment, Integration, and Maintenance of web-based and Client/Server applications using SQL and Big Data technologies.
  • Experience in Application Development using Hadoop and related Big Data technologies such as HBase, Hive, Pig, Flume, Oozie, Sqoop, and ZooKeeper.
  • In-depth knowledge of Data Structures, Design and Analysis of Algorithms, and a good understanding of Data Mining and Machine Learning techniques.
  • Excellent knowledge on Hadoop Architecture and various components such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node.
  • Well versed in installing, configuring, supporting, and managing Hadoop clusters and the underlying Big Data infrastructure.
  • Hadoop/Big Data technology experience in storage, querying, processing, and analysis of data.
  • Proficient in the design and development of MapReduce programs using Apache Hadoop for analyzing big data as per requirements.
  • Hands on experience in installing, configuring, and using Hadoop ecosystem components like HDFS, MapReduce, HBase, Zookeeper, Oozie, Hive, Sqoop, Pig, and Flume.
  • Skilled in writing MapReduce jobs using Pig and Hive.
  • Knowledge in managing and reviewing Hadoop Log files.
  • Expertise in wide array of tools in the Big Data Stack such as Hadoop, Pig, Hive, HDFS, MapReduce, Sqoop, Spark, Kafka, Yarn, Oozie, and Zookeeper.
  • Knowledge of streaming the Data to HDFS using Flume.
  • Excellent programming skills with experience in Java, C, SQL and Python Programming.
  • In-depth and extensive knowledge of analyzing data using HiveQL, Pig Latin, HBase, and custom MapReduce programs in Java (a minimal MapReduce sketch follows this summary).
  • Experience in tuning and troubleshooting performance issues in Hadoop cluster.
  • Worked on importing data into HBase using HBase Shell and HBase Client API.
  • Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
  • Extensive experience working on various databases and database script development using SQL and PL/SQL.
  • Hands on experience in application development using Java, RDBMS and Linux Shell Scripting.
  • Experience in writing Pig and Hive scripts and extending Hive and Pig core functionality by writing custom UDFs.
  • Knowledge of real-time processing using Spark Streaming with Kafka.
  • Involved in HBase setup and in storing data into HBase, which is used for further analysis.
  • Good experience in Hive partitioning, bucketing, and performing different types of joins on Hive tables, and in implementing Hive SerDes with JSON and Avro.
  • Supported Map Reduce Programs running on the cluster and wrote custom Map Reduce Scripts for Data Processing in Java.
  • Used Spark with Kafka to stream real-time data.
  • Used Spark and Scala to migrate MapReduce programs to Spark transformations.
  • Connected Hive ODBC to the corresponding Hive tables for testing and generated the final reports using Tableau.
  • Used Elasticsearch, Apache Storm, and Apache Kafka to build data platforms, system storage, and pipelines.
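
Illustrative MapReduce sketch (Java): a minimal word-count style job of the kind referenced in the bullets above. The class name and input/output paths are hypothetical and not taken from any specific project below.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    // Hypothetical job: counts occurrences of each whitespace-separated token in the input.
    public class TokenCountJob {

        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                // Emit (token, 1) for every token in the input line.
                for (String token : value.toString().split("\\s+")) {
                    if (!token.isEmpty()) {
                        word.set(token);
                        context.write(word, ONE);
                    }
                }
            }
        }

        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) {
                    sum += v.get();
                }
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "token count");
            job.setJarByClass(TokenCountJob.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory (must not exist)
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }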

TECHNICAL SKILLS:

Big Data Technologies: Hadoop, HDFS, Hive, MapReduce, Pig, Sqoop, Flume, ZooKeeper, Spark.

Programming Languages: Java, Python, R, SAS, shell scripting.

Web Technologies: HTML, J2EE, CSS, JS, AJAX, JSP, DOM, XML, XSLT, XPATH.

Java Frameworks: Struts, Spring, Hibernate

Data Analytical Tools: Weka, RapidMiner, Unmet, Visio, Crystal Reports, Spreadsheet Modeling.

DB Languages: SQL, PL/SQL.

Scripting Languages: Shell scripting, Puppet scripting, Python, Bash, CSH, Ruby, PHP.

Databases/ETL: Oracle, MySQL 5.2, DB2, MS SQL Server.

NoSQL Databases: HBase, Cassandra, MongoDB.

Operating Systems: Linux, UNIX, Windows, iOS.

PROFESSIONAL EXPERIENCE:

Hadoop Developer

Confidential, Edison NJ

Responsibilities:

  • Worked on analyzing data and writing Hadoop MapReduce jobs using the Java API, Pig, and Hive.
  • Responsible for building scalable distributed data solutions using Hadoop.
  • Involved in loading data from edge node to HDFS using shell scripting.
  • Created HBase tables to store variable data formats of PII data coming from different portfolios.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Worked with different compression techniques such as LZO, Snappy, and Bzip to save storage and optimize data transfer over the network.
  • Analyzed large and critical datasets using Cloudera, HDFS, HBase, MapReduce, Hive, Hive UDFs, Pig, Sqoop, ZooKeeper, and Spark.
  • Developed custom aggregate functions using SparkSQL and performed interactive querying.
  • Used Sqoop to store the data into HBase and Hive.
  • Worked on installing the cluster, commissioning and decommissioning of Data Nodes, Name Node high availability, capacity planning, and slots configuration.
  • Created Hive tables, dynamic partitions, and buckets for sampling, and worked on them using HiveQL.
  • Used Pig to parse the data and store it in Avro format.
  • Stored the data in tabular formats using Hive tables and Hive SerDes.
  • Collected and aggregated large amounts of log data using Apache Flume and staged the data in HDFS for further analysis.
  • Worked with NoSQL databases like HBase, creating HBase tables to load large sets of semi-structured data coming from various sources (an illustrative HBase client API write is sketched after this list).
  • Implemented a script to transmit information from Oracle to HBase using Sqoop.
  • Implemented MapReduce programs to handle semi/unstructured data like XML, JSON, and sequence files for log files.
  • Fine-tuned Pig queries for better performance.
  • Involved in writing shell scripts for exporting log files to the Hadoop cluster through an automated process.
  • Installed the Oozie workflow engine to run multiple Hive and Pig jobs.
  • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
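
Illustrative HBase write via the Java client API, as referenced in the HBase bullet above. A minimal sketch only; the table, column family, row key, and values are hypothetical.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    // Hypothetical example: writes one semi-structured record into an HBase table.
    public class HBasePutExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // picks up hbase-site.xml from the classpath
            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table table = connection.getTable(TableName.valueOf("portfolio_events"))) { // hypothetical table
                Put put = new Put(Bytes.toBytes("row-0001"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("source"), Bytes.toBytes("portfolio-a"));
                put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("payload"), Bytes.toBytes("{\"status\":\"ok\"}"));
                table.put(put);
            }
        }
    }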

Environment: Hadoop, MapReduce, HDFS, YARN, Sqoop, Oozie, Pig, Hive, HBase, Spark, Java, Eclipse, UNIX shell scripting, Python, Hortonworks.

Hadoop Developer

Confidential, Oak Brook, IL

Responsibilities:

  • Responsible for building scalable distributed data solutions using Hadoop.
  • Installed and configured Hive, Pig, Oozie, and Sqoop on Hadoop cluster.
  • Supported Map Reduce Programs that are running on the cluster.
  • Cluster monitoring, maintenance and troubleshooting.
  • Handled the importing of data from various data sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Analyzed the data by performing Hive queries (HiveQL) and running Pig scripts (Pig Latin); a HiveQL-over-JDBC sketch follows this list.
  • Installed Oozie workflow engine to run multiple Hive and Pig jobs.
  • Experience with ETL using the BusinessObjects tool.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Experienced in loading and transforming large sets of structured, semi-structured, and unstructured data.
  • Worked on NoSQL database including HBase.
  • Analyzed large amounts of data sets to determine optimal way to aggregate and report on it.
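
Illustrative HiveQL analysis run through the HiveServer2 JDBC driver, as referenced above. The host, database, table, and column names are hypothetical.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Hypothetical example: aggregates row counts per source system from a Hive table.
    public class HiveQueryExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:hive2://hiveserver2-host:10000/default", "hive", "");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT source_system, COUNT(*) AS cnt "
                       + "FROM staged_events GROUP BY source_system")) {
                while (rs.next()) {
                    System.out.println(rs.getString("source_system") + "\t" + rs.getLong("cnt"));
                }
            }
        }
    }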

Environment: Hadoop v1.0.1, HDFS, MapReduce, Hive, Sqoop, Pig, DB2, Oracle, XML, CDH3.

Sr. Hadoop Developer

Confidential, Edenton, NC

Responsibilities:

  • Worked closely with the business analysts to convert the Business Requirements into Technical Requirements and prepared low and high-level documentation.
  • Hands-on experience writing MR jobs for encryption and for converting text data into Avro format.
  • Hands-on experience joining raw data with existing data sets using Pig scripting.
  • Hands-on experience writing scripts for copying data between different clusters and between different UNIX file systems.
  • Hands-on experience writing MR jobs for cleansing the data and copying it from our cluster to the AWS cluster.
  • Developed Spark SQL script for handling different data sets and verified its performance over MR jobs.
  • Connected Tableau from the client end to the AWS IP addresses and viewed the end results.
  • Developed Coordinator and Oozie workflows to automate the jobs.
  • Hands-on experience writing Hive UDFs to handle different Avro schemas (an illustrative UDF is sketched after this list).
  • Experience moving large datasets hourly in the Avro file format and running Hive and Impala queries.
  • Hands-on experience working with Snappy compression and with different file formats.
  • Developed a shell script to back up the NameNode metadata.
  • Used Cloudera Manager to monitor the health of the jobs running on the cluster.
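
Illustrative Hive UDF of the kind referenced above; the function name and normalization logic are hypothetical. Once packaged in a JAR, it would typically be registered with ADD JAR and CREATE TEMPORARY FUNCTION before use in HiveQL.

    import org.apache.hadoop.hive.ql.exec.Description;
    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Hypothetical UDF: lower-cases and trims a string column.
    @Description(name = "normalize_text",
                 value = "_FUNC_(str) - returns str trimmed and lower-cased")
    public class NormalizeTextUDF extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null;
            }
            return new Text(input.toString().trim().toLowerCase());
        }
    }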

Environment: Hadoop, MapReduce, Cloudera Manager, HDFS, Hive, Pig, Sqoop, Spark, Oozie, Impala, Greenplum, Kafka, SQL, Java (JDK 1.6), Eclipse.

Hadoop Developer

Confidential, MN

Responsibilities:

  • Worked on the complete life cycle of software development, which included new requirement gathering, redesigning and implementing business-specific functionalities, testing, and assisting in deployment of the project to the PROD environment.
  • Worked independently and led the team.
  • Involved in both client-side and server-side development.
  • Involved in developing and leading new modules, enhancements and change requests.
  • Expertise in quickly analyzing production issues and coming up with resolutions.
  • Participated in workshop meetings.
  • Involved in the configuration of Jenkins, Hudson, and Sonar integration.
  • Involved in Sonar KPIs and build activities.
  • Extensive experience with Agile Team Development and Test Driven Development using JIRA.
  • Used a Test Driven Development approach to implement the solutions, writing test classes using JUnit, Mockito, and PowerMockito (an illustrative test is sketched after this list).
  • Used a Behavior Driven Development approach to implement the functional solutions, writing feature files and using Cucumber to generate code.
  • Involved in implementation of different common components.
  • Implemented applications using different technologies.
  • Followed Agile and CD processes to implement the applications.
  • Provided deployment support for test environments along with the production environment.
  • Ensured timely delivery with quality of the product.
  • Used Maven to build, run, release, and create JARs and WAR files, among other uses.
  • Created web pages using Bootstrap, JavaScript, jQuery, Ajax, and AngularJS.
  • Used SQL and PL/SQL concepts extensively.
  • Used SQL*Loader to load and extract the details to the DB.
  • Developed GUIs using the Spring Framework to follow the MVC architecture.
  • Used batch scripts to implement solutions for automating the jobs.
  • Used Hibernate, JDBC, JPA, Spring Data JPA, and QueryDSL to connect to databases like Oracle and MySQL to store, delete, manipulate, and retrieve data in many of my applications.
  • Used JIRA to keep track of bugs, requirements and progress of the sprint
  • Used Jasmine and Karma for unit testing for AngularJS applications
  • Implemented the application specific Web services to transfer data in real-time by utilizing WSDL, REST and JMS technologies
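
Illustrative TDD-style unit test using JUnit and Mockito, as referenced above. The service and repository classes are stand-ins, not project code.

    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import org.junit.Test;

    public class PaymentServiceTest {

        // Hypothetical collaborator mocked in the test below.
        interface PaymentRepository {
            double findBalance(String accountId);
            void saveBalance(String accountId, double balance);
        }

        // Hypothetical class under test.
        static class PaymentService {
            private final PaymentRepository repository;
            PaymentService(PaymentRepository repository) { this.repository = repository; }

            double debit(String accountId, double amount) {
                double newBalance = repository.findBalance(accountId) - amount;
                repository.saveBalance(accountId, newBalance);
                return newBalance;
            }
        }

        @Test
        public void debitReducesBalanceAndPersistsIt() {
            PaymentRepository repository = mock(PaymentRepository.class);
            when(repository.findBalance("acct-1")).thenReturn(100.0);

            PaymentService service = new PaymentService(repository);
            double balance = service.debit("acct-1", 40.0);

            assertEquals(60.0, balance, 0.0001);
            verify(repository).saveBalance("acct-1", 60.0);
        }
    }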

Environment: JSP, JavaScript, Ajax, CSS, Spring, Spring Data, QueryDSL, JSF, JDBC, Hibernate, JPA, Web services, PL/SQL, SQL*Loader, Oracle, SVN, ANT, Maven, Jenkins, JUnit, PowerMockito, Log4J, AspectJ, WebLogic, HTML5, CSS3, Bootstrap, AngularJS.

JAVA Developer

Confidential

Responsibilities:

  • Involved in various phases of Software Development Life Cycle (SDLC) such as requirements gathering, analysis, design and development.
  • Involved in overall performance improvement by modifying third party open source tools like FCK Editor.
  • Developed controllers for request handling using the Spring Framework (an illustrative controller is sketched after this list).
  • Involved in Command controllers, handler mappings, and View Resolvers.
  • Designed and developed application components and architectural proofs of concept using Java, EJB, JSP, JSF, Struts, and AJAX.
  • Participated in Enterprise Integration using web services.
  • Configured JMS, MQ, EJB, and Hibernate on WebSphere and JBoss.
  • Focused on Declarative transaction management
  • Developed XML files for mapping requests to controllers
  • Extensively used Java Collection framework and Exception handling.
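
Illustrative Spring request-handling controller, as referenced above. This is an annotation-based sketch; the actual project used XML-configured command controllers, handler mappings, and view resolvers, and the URL, model attribute, and view name here are hypothetical.

    import org.springframework.stereotype.Controller;
    import org.springframework.ui.Model;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RequestMethod;

    @Controller
    public class AccountController {

        // Handles GET /accounts/{id} and hands a logical view name to the configured ViewResolver.
        @RequestMapping(value = "/accounts/{id}", method = RequestMethod.GET)
        public String showAccount(@PathVariable("id") String id, Model model) {
            // A service lookup would normally populate the model; a literal attribute is used here.
            model.addAttribute("accountId", id);
            return "accountDetail"; // resolved to a view such as /WEB-INF/views/accountDetail.jsp
        }
    }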

Environment: Core Java, J2EE 5, Spring, JSP, Servlets, Hibernate, Hibernate Criteria API, JSF, JSF RichFaces, Java Swing, Web services, WSDL, XML, GlassFish, UML, EJB, JavaScript, jQuery, SQL, CVS, Agile, JUnit.

Java Developer

Confidential

Responsibilities:

  • Responsible for gathering the requirements, doing the analysis, and formulating the requirements specifications with consistent inputs/requirements.
  • Participated in analysis, design and development of e-bill payment system as well as account transfer system and developed specs that include Use Cases, Class Diagrams, Sequence Diagrams and Activity Diagrams.
  • Involved in designing the user interfaces using JSPs.
  • Developed the application using the Struts Framework, which leverages the classical Model-View-Controller (MVC) architecture.
  • Responsible for Documenting Status Reports in Payment Transaction Module.
  • Implemented the payment transaction module for the customers by developing all the components using Java, JSP, Hibernate, Struts, and Spring environments.
  • Resolved technical issues reported by Client.
  • Used MyEclipse for writing code for JSP, Servlets, and Struts.
  • Developed Unix shell (ksh) scripts to automate most of the engineering and testing.
  • Developed business layer components using Enterprise Java Beans (EJB).
  • Used JDBC to invoke stored procedures and for database connectivity to Oracle, as sketched below.
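
Illustrative JDBC call to an Oracle stored procedure, as referenced in the last bullet. The connection URL, credentials, and procedure name are hypothetical.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Types;

    public class StoredProcedureCallExample {
        public static void main(String[] args) throws Exception {
            // Hypothetical Oracle thin-driver URL and credentials.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@db-host:1521:ORCL", "app_user", "secret");
                 CallableStatement call = conn.prepareCall("{call get_account_balance(?, ?)}")) {
                call.setString(1, "acct-1");                 // IN: account id
                call.registerOutParameter(2, Types.NUMERIC); // OUT: balance
                call.execute();
                System.out.println("Balance: " + call.getBigDecimal(2));
            }
        }
    }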

Environment: Java, J2EE, JSP, Struts, EJB, Oracle, ANT, MyEclipse, UNIX, UNIX shell Scripts, Apache Tomcat.
