
Senior Hadoop Developer Resume


Austin, TX

SUMMARY:

  • 7 years of experience in Information Technology, with skills in the analysis, design, development, testing, and deployment of software applications, including web and Windows applications, with an emphasis on object-oriented programming.
  • 4 years of hands-on experience with the Hadoop framework and its ecosystem, including MapReduce programming, Spark, Spark SQL, Spark Streaming, Kafka, Hive, Pig, Sqoop, and Oozie.
  • Hands-on experience with Hadoop distributions such as Hortonworks and Cloudera.
  • Experience in analyzing data using HiveQL, Pig Latin, Spark Core, and Spark SQL.
  • Hands on experience in installing, configuring and using ecosystem components like Kafka, Spark, HDFS, Sqoop, Pig and Hive.
  • Hands-on experience productionizing Hadoop applications, including administration, configuration management, monitoring, debugging, and performance tuning.
  • HORTONWORKS certified Hadoop developer.
  • Hands-on experience in designing and coding web applications using Core Java and J2EE technologies and open-source frameworks such as Spring, Hibernate, and web services.
  • Excellent communication, interpersonal, and problem-solving skills; a strong team player with a can-do attitude and the ability to communicate effectively with all levels of the organization, including technical staff, management, and customers.

TECHNICAL SKILLS:

Big Data: Hadoop 2.2, HDFS, Pig, Hive, Sqoop, ZooKeeper, Tez, Tableau, HBase (NoSQL), Spark, Flume, YARN.

Programming Languages: Java/J2EE, Scala, SQL, PL/SQL, HSQL, Shell Scripting.

Databases: Oracle 10g, Microsoft SQL Server, MySQL, DB2, MongoDB, Cassandra, HBase

Build Tools: Jenkins, Build Forge.

Web Application Servers: Apache Tomcat, WebLogic, WebSphere.

Version Control: Git, SVN, IBM RTC.

Methodologies: Agile, UML, Design Patterns, SDLC

Operating Systems: Mac OS X 10.9.5, Windows 2008/Vista/2003/XP/2000/NT, Linux.

PROFESSIONAL EXPERIENCE:

Confidential, Austin, TX

Senior Hadoop Developer

Responsibilities:

  • Developed data pipeline using Flume, Sqoop, HIVE and OOZIE to ingest customer behavioral data and financial histories into HDFS for analysis.
  • Used Hive as an ETL tool to perform transformations, event joins, and pre-aggregations on data stored in HDFS to generate reports.
  • Worked with Oozie workflow engine to manage interdependent Hadoop jobs and to automate several types of Hadoop jobs.
  • Used Hive to analyze the partitioned and bucketed data and compute various metrics for reporting.
  • Responsible for implementing POCs to migrate iterative MapReduce programs into Spark transformations using Spark and Scala.
  • Developed Spark code using Scala and Spark SQL for faster processing and testing.
  • Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs and Scala.
  • Developed a data pipeline using Kafka, Spark, and Hive to ingest, transform, and analyze customer behavioral data.
  • Explored Spark for improving the performance and optimization of existing algorithms in Hadoop, using Spark Context, Spark SQL, DataFrames, pair RDDs, and Spark on YARN.
  • Used Spark for interactive queries, processing of streaming data, and integration with popular NoSQL databases for large volumes of data.
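The Hive-to-Spark conversions above come down to expressing a GROUP BY aggregate as key-value transformations. A minimal sketch of that shape in plain Java (java.util.stream stands in for the RDD API so it runs without a cluster; the class name and fields are illustrative, not from the project):

```java
import java.util.*;
import java.util.stream.*;

// Sketch: SELECT customer_id, SUM(amount) FROM events GROUP BY customer_id,
// expressed as the map/reduce-by-key chain a Spark rewrite would use.
// In Spark the same shape would be rdd.mapToPair(...).reduceByKey(Double::sum).
public class EventAggregation {

    static Map<String, Double> totalsByCustomer(List<Map.Entry<String, Double>> events) {
        return events.stream()
            .collect(Collectors.groupingBy(
                Map.Entry::getKey,                               // the "shuffle" key
                Collectors.summingDouble(Map.Entry::getValue))); // per-key reduce
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Double>> sample = List.of(
            Map.entry("c1", 10.0), Map.entry("c2", 5.0), Map.entry("c1", 2.5));
        // totals: c1=12.5, c2=5.0 (map iteration order unspecified)
        System.out.println(totalsByCustomer(sample));
    }
}
```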

Environment: Hadoop, Spark Core, Spark-SQL, Spark-Streaming, Kafka, HDFS, Hive, Java, Scala, HSQL, Pig, Oracle.

Confidential, Portland, OR

Hadoop Engineer

Responsibilities:

  • Responsible for building scalable distributed data solutions using Hadoop cluster environment with Hortonworks distribution.
  • Wrote Pig scripts to analyze and transform raw data from several data sources into baseline data.
  • Solved performance issues in Hive and Pig scripts with an understanding of joins, grouping, and aggregation, and how they translate to MapReduce jobs.
  • Developed Oozie workflows for scheduling and orchestrating the ETL process.
  • Created Pig Latin scripts to sort, group, join, and filter the enterprise-wide data.
  • Involved in creating Hive tables, loading them with data, and writing Hive queries that run internally as MapReduce jobs.
  • Used Hive to form an abstraction on top of structured data residing in HDFS, and implemented partitions, dynamic partitions, and buckets on Hive tables.
  • Worked on performance analysis and improvements for Hive and Pig scripts at the MapReduce job-tuning level.
  • Developed UDFs in Java to extend the functionality of Pig and Hive scripts.
  • Designed the technical solution for real-time analytics using Kafka and HBase.
  • Used Spark Streaming APIs to perform the necessary transformations and actions on data received from Kafka.
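Java UDFs of the kind described above usually wrap a small pure function. A hedged sketch of what such an evaluate body might look like (a real Hive UDF would extend org.apache.hadoop.hive.ql.exec.UDF and take Text arguments; the class name and cleanup rule here are illustrative, shown in plain Java so it runs without Hive on the classpath):

```java
// Illustrative core of a Hive/Pig text-cleanup UDF: trim, lower-case,
// and collapse internal whitespace before joining raw feeds to baseline data.
public class NormalizeUdf {

    static String evaluate(String raw) {
        if (raw == null) return null;  // Hive passes NULL through UDFs unchanged
        return raw.trim().toLowerCase().replaceAll("\\s+", " ");
    }

    public static void main(String[] args) {
        System.out.println(evaluate("  Enterprise   DATA  ")); // enterprise data
    }
}
```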

Environment: Hadoop, MapReduce, Hive, Pig, HBase, Sqoop, Flume, Scala, Spark, Oozie, Kafka, Linux, Java (JDK), Eclipse, HDFS, Oracle.

Confidential, Los Angeles, CA

Java/J2EE Developer

Responsibilities:

  • Implemented the project using Agile methodology with three-week sprints, following an SOA architecture.
  • Developed the application using Hibernate to implement and leverage the ORM framework.
  • Used JUnit for all unit and integration testing.
  • Created SOAP web services to allow communication between the applications.
  • Integrated the Spring framework for dependency injection and transaction management across the different layers of the application.
  • Used Spring MVC controllers for the controller part of the MVC architecture.
  • Designed Use Case Diagrams, Class Diagrams and Sequence Diagrams using Microsoft Visio.
  • Designed the user interface based on the MVC framework, coordinating Struts MVC, JSP, Servlets, and custom tag libraries.
  • Created Action classes for the controller in the Struts MVC framework.
  • Implemented the logging mechanism for the entire application using Log4j.
  • Extensively used Oracle as the database for all data related to both web applications.
  • Used Spring Security to handle authentication, data integrity, and single sign-on.
  • Built the project using the Maven build tool.
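The dependency injection described above follows one core pattern: the controller depends on an interface, and the concrete implementation is supplied from outside, which is what makes each layer independently testable with JUnit. A plain-Java sketch of that pattern (the class and method names are illustrative; in the real application Spring would wire the dependency):

```java
// Constructor injection, the pattern Spring's IoC container automates.
public class InjectionSketch {

    interface AccountDao { double balance(String id); }

    // Stands in for a Spring MVC controller; plain class so it runs standalone.
    static class AccountController {
        private final AccountDao dao;  // injected dependency, never constructed here
        AccountController(AccountDao dao) { this.dao = dao; }
        String show(String id) { return "balance=" + dao.balance(id); }
    }

    public static void main(String[] args) {
        AccountDao stub = id -> 42.0;  // test double, as one would build with a mock
        System.out.println(new AccountController(stub).show("a1")); // balance=42.0
    }
}
```

Because the controller never constructs its DAO, a unit test can pass in a stub exactly as main does, with no database required.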

Environment: JSF, Spring, JNDI, JMS, MQSeries, XSLT, SOAP web services, J2EE, Oracle 10g, PL/SQL, WebLogic, TortoiseSVN, Hibernate, JPA, JUnit.

Confidential, Boston, MA

Java/J2EE Developer

Responsibilities:

  • Actively involved in meetings with stakeholders and business analysts, gathering requirements, and leading an offshore team to meet deadlines.
  • Followed Agile methodology (TDD, Scrum) to satisfy customer requirements.
  • Developed JUnit test cases using JUnit 4.0 and EasyMock.
  • Defined stored procedures for complex processing that requires the execution of several SQL statements.
  • Used MQSeries messaging to provide a communication mechanism between applications on Windows and Linux platforms.
  • Implemented stateless web services using REST to enable greater scalability, so that the server does not have to maintain, update, or communicate session state.
  • Developed LDAP server configuration files to enable encryption support for password storage.
  • Implemented JSF converters to handle formatting and localization, and configured faces-config.xml to create web application navigation rules.
  • Used Python to compile bytecode when building the application.
  • Used Log4j for tracking errors and debugging the code.
  • Implemented the Spring MVC architecture and increased modularity by separating cross-cutting concerns using Spring AOP.
  • Deployed the application to the WebLogic Server environment using the Maven build tool.
  • Involved in JUnit testing and system testing, and responsible for preparing test scripts for system testing.
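The formatting-and-localization work a JSF converter does is locale-sensitive rendering of a model value. A hedged sketch of the core logic (a real converter would implement javax.faces.convert.Converter and override getAsString/getAsObject; the class name here is illustrative, and java.text carries the locale logic so the example runs standalone):

```java
import java.text.NumberFormat;
import java.util.Locale;

// Core of a getAsString(...) implementation: render an amount for the
// request's locale rather than hard-coding one display format.
public class CurrencyConverterSketch {

    static String getAsString(double amount, Locale locale) {
        return NumberFormat.getCurrencyInstance(locale).format(amount);
    }

    public static void main(String[] args) {
        System.out.println(getAsString(1234.5, Locale.US));      // $1,234.50
        System.out.println(getAsString(1234.5, Locale.GERMANY)); // locale-dependent, e.g. 1.234,50 €
    }
}
```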

Environment: JSF, JMS, JPA/Hibernate, Spring, JDBC, REST, SVN, UML, Java Swing, EDI, Log4j, Oracle 10g, MySQL, WAS 8, MQSeries, Python, Unix, Maven, JUnit, JNDI.
