
Big Data / Hadoop Developer Resume


Cleveland, Ohio

SUMMARY

  • Experience in the banking and finance, automotive engineering, software services, and lawn equipment domains.
  • 5+ years of IT experience in analysis, design, and development using Hadoop, Java, and J2EE.
  • 3.5 years of experience with Big Data, Hadoop, HDFS, MapReduce, and the Hadoop ecosystem (Pig & Hive).
  • Hands-on experience writing MapReduce jobs in Java.
  • Hands-on experience writing Pig Latin scripts and Pig commands.
  • Hands-on experience installing, configuring, and using ecosystem components such as Hadoop MapReduce, HDFS, Sqoop, Pig, and Hive.
  • Experience in database development using SQL and PL/SQL, working with Oracle 9i/10g, Informix, and SQL Server.
  • Experience with all aspects of development, from requirement discovery and initial implementation through release, enhancement, and support (SDLC and Agile techniques).
  • Expert in data analysis, gap analysis, business coordination, requirements gathering, and technical document preparation.
  • Experience working on NoSQL databases including Cassandra, MongoDB, and HBase.
  • Experience using Sqoop to import data into HDFS from RDBMS and vice-versa.
  • Effective team player with excellent communication skills and the insight to set priorities, schedule work, and meet critical deadlines.
  • Good knowledge of HTML, CSS, JavaScript, and web-based applications.
  • Strong technical and architectural knowledge in solution development.
  • Effective working independently and collaboratively in teams.
  • Good analytical, communication, problem-solving, and interpersonal skills.
  • Flexible and ready to take on new challenges.
  • Result-oriented and target-driven.
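A Sqoop transfer of the kind described above might look like the following sketch; the connection string, user, tables, and HDFS paths are placeholders, not details from an actual engagement:

```shell
# Import an RDBMS table into HDFS (placeholder connection details).
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/orders \
  --num-mappers 4

# Export processed results back to the RDBMS ("vice-versa").
sqoop export \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table order_summary \
  --export-dir /data/order_summary
```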
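A Pig Latin script of the sort referenced above could be sketched as follows; the relation names, paths, and schema are hypothetical:

```pig
-- Load raw records (placeholder path and schema).
logs = LOAD '/data/logs' USING PigStorage(',')
       AS (user_id:chararray, page:chararray, bytes:int);

-- Group by user and total the bytes transferred.
by_user = GROUP logs BY user_id;
totals  = FOREACH by_user GENERATE group AS user_id,
          SUM(logs.bytes) AS total_bytes;

STORE totals INTO '/data/user_totals';
```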

TECHNICAL SKILLS

Big Data Ecosystem: Hadoop, MapReduce, HDFS, HBase, ZooKeeper, Hive, Pig, Sqoop, Oozie, Flume and Impala.

Programming: Java, C, Python, SAS and PL/SQL

Database: Oracle 10g, DB2, SQL, NoSQL (MongoDB)

Server: WebSphere Application Server 7.0, Apache Tomcat 5.x/6.0, JBoss 4.0

Web: HTML, JavaScript, XML, XSL, XSLT, XPath, DOM, XQuery

Analytics: Tableau, SPSS, SAS EM and SAS JMP

Scripts: Bash, Python, ANT

OS & Others: Windows, Linux, IBM HTTP Server, SVN, ClearCase, PuTTY, Postman, SOAP UI, WinSCP and FileZilla

PROFESSIONAL EXPERIENCE

Confidential, Cleveland, Ohio

Big Data / Hadoop Developer

Responsibilities:

  • Set up a 32-node cluster and configured the entire Hadoop platform.
  • Maintained system integrity of all Hadoop sub-components (primarily HDFS, MapReduce, HBase, and Hive).
  • Integrated the Hive warehouse with HBase.
  • Migrated the required data from MySQL and MongoDB into HDFS using Sqoop, and imported flat files of various formats into HDFS.
  • Wrote custom Hive UDFs in Java where the required functionality was too complex for built-in functions.
  • Designed and created Hive external tables on a shared metastore (instead of Derby), using partitioning, dynamic partitioning, and buckets.
  • Wrote HiveQL scripts to create, load, and query tables in Hive.
  • Supported MapReduce programs running on the cluster.
  • Monitored system health and logs, responding to any warning or failure conditions.
  • Generated final reporting data for testing in Tableau, connecting to the corresponding Hive tables through the Hive ODBC connector.
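The partitioned, bucketed external tables described above can be sketched in HiveQL; the table name, columns, and HDFS paths here are hypothetical placeholders:

```sql
-- Hypothetical external table stored outside the Hive warehouse,
-- partitioned by load date and bucketed for join performance.
CREATE EXTERNAL TABLE IF NOT EXISTS web_events (
  user_id BIGINT,
  event   STRING,
  amount  DOUBLE
)
PARTITIONED BY (load_date STRING)
CLUSTERED BY (user_id) INTO 32 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/data/web_events';

-- Dynamic partitioning lets Hive derive load_date from the data.
SET hive.exec.dynamic.partition.mode = nonstrict;
INSERT OVERWRITE TABLE web_events PARTITION (load_date)
SELECT user_id, event, amount, load_date FROM staging_events;
```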

Environment: Apache Hadoop, HDFS, Cloudera CDH4, Hive, MapReduce, Java, Pig, Sqoop, MySQL, Tableau, Elasticsearch, Talend, SFTP, Kibana.

Confidential, Michigan

Hadoop Developer

Responsibilities:

  • Developed Java MapReduce jobs for aggregation and interest-matrix calculation for users.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Managed and reviewed Hadoop log files.
  • Created and maintained the Hive warehouse for Hive analysis.
  • Generated test cases for new MapReduce jobs.
  • Ran various Hive queries on the data dumps and generated aggregated datasets for downstream systems for further analysis.
  • Used Apache Sqoop to load user data into HDFS on a weekly basis.
  • Ran clustering and user-recommendation agents on users' weblogs and profiles to generate the interest matrix.
  • Prepared the data for consumption by formatting it for upload to the UDB system.
  • Led and programmed the recommendation logic for various clustering and classification algorithms in Java.
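Outside the cluster, the core aggregation these MapReduce jobs perform can be sketched in plain Java with no Hadoop dependency; the class and sample tokens are illustrative only, and a real job would instead extend `Mapper`/`Reducer` from the Hadoop MapReduce API:

```java
import java.util.*;
import java.util.stream.*;

// Minimal sketch of the map -> shuffle -> reduce flow used by the
// aggregation jobs, in plain Java (no Hadoop dependency).
public class AggregationSketch {
    // "Map" phase: emit (token, 1) pairs from a raw log line.
    static List<Map.Entry<String, Integer>> map(String logLine) {
        return Arrays.stream(logLine.split("\\s+"))
                     .map(token -> Map.entry(token, 1))
                     .collect(Collectors.toList());
    }

    // "Reduce" phase: sum the counts grouped by key.
    static Map<String, Integer> reduce(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (Map.Entry<String, Integer> p : pairs) {
            counts.merge(p.getKey(), p.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> shuffled = new ArrayList<>();
        for (String line : List.of("sports autos", "autos news autos")) {
            shuffled.addAll(map(line));
        }
        System.out.println(reduce(shuffled)); // {autos=3, news=1, sports=1}
    }
}
```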

Environment: HDFS, Shell scripting, Apache Pig, MapReduce, Java, Text Analytics, Hive, Sqoop

Confidential, MI

Sr. Java Developer

Responsibilities:

  • Responsible for requirement gathering and analysis through interaction with end users.
  • Involved in developing server-side components for other subsystems.
  • Worked on Maven build tool.
  • Involved in developing JSP pages using Struts custom tags, jQuery, and the Tiles Framework.
  • Used JavaScript for client-side validation and the Struts Validator Framework for server-side validation.
  • Gained good experience in Mule development.
  • Developed web applications as rich Internet applications using Java applets, Silverlight, and JavaFX.
  • Provided production support, using IBM ClearQuest to track and fix bugs.
  • Involved in creating database SQL and PL/SQL queries and stored procedures.
  • Debugged and developed applications using Rational Application Developer (RAD).
  • Developed a Web service to communicate with the database using SOAP.
  • Designed and developed the application using various design patterns, such as session facade, business delegate and service locator.
  • Deployed the components into WebSphere Application Server 7.
  • Actively involved in backend tuning SQL queries/DB script.
  • Worked in writing commands using UNIX, Shell scripting.
  • Developed DAO (data access objects) using Spring Framework 3.
  • Implemented Singleton classes for property loading and static data from DB.
  • Led the team in designing use-case diagrams, class diagrams, and interaction diagrams using UML with Rational Rose.
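The Singleton property loader mentioned above might be sketched as follows; the class name and the resource path `app.properties` are placeholders, not the production code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Hedged sketch of the Singleton pattern for property loading: one
// lazily created instance holds configuration for the whole JVM.
public final class AppConfig {
    private static AppConfig instance;
    private final Properties props = new Properties();

    private AppConfig() {
        // Placeholder resource name; absent resource leaves props empty.
        try (InputStream in =
                 AppConfig.class.getResourceAsStream("/app.properties")) {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            throw new IllegalStateException("Cannot load properties", e);
        }
    }

    // Lazy, thread-safe access to the single instance.
    public static synchronized AppConfig getInstance() {
        if (instance == null) {
            instance = new AppConfig();
        }
        return instance;
    }

    public String get(String key, String defaultValue) {
        return props.getProperty(key, defaultValue);
    }
}
```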

Environment: Java EE 6, IBM WebSphere Application Server 7, Eclipse 4.2, JSTL, DHTML, Apache Struts 2.0, EJB 3, Oracle 11g/SQL, JUnit 3.8, CVS 1.2, Spring 3.2, JSP 2.0, Struts Validator, Struts Tiles, Web Services, jQuery 1.7, Servlet 3.0, Tag Libraries, ANT 1.5, JDBC, Rational ClearCase
