
Senior Hadoop Developer Resume


Sunnyvale, CA

SUMMARY:

  • Over 9 years of software development experience across a variety of industries, including hands-on experience with Big Data technologies
  • In-depth understanding of Hadoop architecture and its components, such as HDFS, NameNode, DataNode, ResourceManager, ApplicationMaster, NodeManager, JobTracker, and TaskTracker
  • Knowledgeable in Big Data analysis tools including Hadoop, HDFS, MapReduce, Hive, Pig, Oozie, Sqoop, Hue, Zookeeper, Kafka, Spark Core, Spark SQL, and Scala
  • Acquainted with the Cloudera, Hortonworks Data Platform, and Apache Hadoop distributions
  • Experience writing MapReduce programs with Apache Hadoop to process Big Data
  • Excellent understanding of both Classic MapReduce and YARN and their applications in Big Data analytics
  • Developed a data validation framework using MapReduce
  • Expertise in writing Hadoop Jobs for analyzing data using Hive and Pig
  • Experience importing and exporting data between HDFS and relational database systems using Sqoop
  • Experience scheduling and monitoring Hadoop workflows using Oozie
  • Good experience working with NoSQL databases such as Cassandra, including its architecture
  • Experience with Scala and Spark SQL, Spark Streaming, Spark Core, RDD transformations and actions, and DataFrames for faster analysis and processing of data
  • Good experience with core and advanced Java technologies
  • Experience in development and integration of web applications using Servlets, JSP, the Spring Framework, Web Services, XML, HTML, CSS, and JavaScript
  • Experience working with different relational databases, including MySQL, Oracle, DB2, and Teradata
  • Experience developing Web Services using JAX-RS and JAX-WS
  • Experience developing scripting components using Python and Unix shell scripts
  • Experience writing build scripts with Ant, Maven, and SBT, as well as Unix shell, Python, and PowerShell scripts for system management
  • Experience in various domains, including Banking, Telecom, and Agriculture
  • Strong experience with and understanding of software development methodologies such as Waterfall and Agile (Scrum)
  • Handled several techno-functional responsibilities, including estimates, identifying functional and technical gaps, requirements gathering, designing solutions, development, documentation, and production support
  • An individual with excellent interpersonal and communication skills, strong business acumen, creative problem-solving skills, technical competency, the ability to quickly learn new technologies and adapt to new environments, self-motivation, a team-player spirit, and leadership skills

TECHNICAL SKILLS:

Operating Systems: Unix, Linux, Windows NT, Windows 95/98, Windows 2000 Professional

Hadoop Ecosystem & Tools: Apache Hadoop, HDFS, MapReduce, YARN, Hive, Tez, Pig, Oozie, Sqoop, Hue, Spark Core, Spark SQL, Zookeeper

Hadoop Distributions: Hortonworks and Cloudera Hadoop distributions

NoSQL Databases: Cassandra 2.2 (DataStax Distribution)

Languages: Java (JDK 1.4/1.5/1.6/1.7/1.8), Python, Scala

Messaging: JMS, HornetQ, Apache Kafka

Web Technologies: J2EE, Servlets 2.x/3.x, JSP 2.x, JDBC 2.0/3.0, RMI, XML, SOA, UML, MVC, Spring, Struts, JavaScript, HTML, CSS

Scripting: Unix Shell Scripting, Python

Frameworks: J2EE, JDBC, RMI, Struts 1.1/1.2, Spring Framework

RDBMS: Oracle 8i/9i/10g, DB2, MySQL, Derby, Teradata, SQL

Web/Application Servers: WebSphere Application Server 5.1/6.0/7.0, JBoss, Apache Tomcat

IDE / Tools: RAD, RTC, Eclipse 3.1, Maven, MKS, Log4j, Quality Center

Methodologies: Unified Modeling Language (UML), Agile Methodology (Scrum)

PROFESSIONAL EXPERIENCE:

Confidential, Sunnyvale, CA

Senior Hadoop Developer

Responsibilities:

  • Worked on data ingestion from external RDBMS systems using Sqoop.
  • Performed data cleaning, integration, transformation, and reduction by developing MapReduce jobs in Java.
  • Wrote Hive jobs to process data per business requirements for further analysis by data analysts.
  • Involved in designing and developing the Hive data model, loading data, and writing Java UDFs for Hive.
  • Used Hive partitioning and bucketing to improve performance and to maintain historical data in tables in a modular fashion.
  • Used Hive join strategies such as map joins, reduce-side joins, and SMB joins to query data from different Hive tables efficiently.
  • Wrote Oozie workflows to design and schedule Hadoop batch jobs.
  • Involved in application migration from Hadoop MapReduce to Spark for faster processing.
  • Worked on migrating Hive queries to Spark SQL scripts.
  • Worked on migrating MapReduce jobs to Spark jobs using Scala.
  • Processed data using Spark RDD transformations and actions.
  • Worked on event triggering using Java messaging system implementations such as Kafka and HornetQ.
  • Developed scripting components using Python.
  • Involved in configuring the Zookeeper distributed cluster coordination service.
  • Involved in monitoring application status using the Resource Manager web console.
  • Involved in installing and configuring the Hadoop ecosystem using the Hortonworks distribution.
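
The Hive partitioning, bucketing, and map-join work above can be sketched in HiveQL roughly as follows; the table and column names are hypothetical, chosen only to illustrate the technique:

```sql
-- Illustrative sketch; table/column names are hypothetical.
-- Partition by date so range queries scan only the relevant
-- partition directories; bucket by customer_id so bucketed
-- (e.g. SMB) joins can match buckets pairwise.
CREATE TABLE transactions (
  txn_id      BIGINT,
  customer_id BIGINT,
  amount      DECIMAL(10,2)
)
PARTITIONED BY (txn_date STRING)
CLUSTERED BY (customer_id) INTO 32 BUCKETS
STORED AS ORC;

-- Map join: hint Hive to broadcast the small dimension table
-- to every mapper instead of running a reduce-side shuffle join.
SELECT /*+ MAPJOIN(c) */ t.txn_id, c.customer_name, t.amount
FROM transactions t
JOIN customers c ON t.customer_id = c.customer_id
WHERE t.txn_date = '2015-06-01';
```

Partition pruning via the `WHERE t.txn_date = …` predicate is what keeps historical data queryable without full-table scans.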

Environment: Java 1.7, Hortonworks Data Platform 2.2, Hadoop & YARN 2.6, Hive 0.14, Tez 0.5.2, Sqoop 1.4.5, Oozie 4.1.0, Kafka 0.8.1, Zookeeper 3.4.5, Spark 1.2.0, Oracle 11g, HornetQ, Shell scripting, Quality Control, SVN, Maven

Confidential, Florham Park, NJ

Hadoop Developer

Responsibilities:

  • Responsible for loading data from the BDW Oracle database and Teradata into HDFS using Sqoop.
  • Wrote MapReduce and Hive jobs to process data per business requirements for further analysis by data analysts.
  • Used Hive partitioning and bucketing to improve performance and to maintain historical data in tables in a modular fashion.
  • Involved in designing and developing the Hive data model, loading data, and writing Java UDFs for Hive.
  • Developed Pig Latin scripts for transformations, event joins, and filters.
  • Wrote Oozie workflows to design and schedule Hadoop batch jobs.
  • Monitored application status using the Resource Manager web console.
  • Involved in installing and configuring the Hadoop ecosystem using the Cloudera distribution.
  • Involved in POCs on the Cassandra NoSQL database.
  • Involved in installing, configuring, troubleshooting, monitoring, and upgrading Cassandra.
  • Worked on designing the Cassandra data model, including keyspaces and column families.
  • Wrote CQL scripts to create, alter, and drop keyspaces and column families.
  • Wrote CQL scripts to perform CRUD operations on the Cassandra database.
  • Wrote Cassandra batch statements using cqlsh.
  • Created and maintained technical documentation for launching Cassandra clusters and for executing CQL queries.
  • Performed analysis, resolved problems, and monitored systems to proactively prevent problems from occurring.
  • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries.
  • Troubleshot complex development and production application problems and provided technical and production support on an on-call basis.
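
The Cassandra data-model and CQL work above can be sketched as follows; the keyspace, table, and values are hypothetical, for illustration only:

```sql
-- Illustrative CQL sketch; keyspace/table names are hypothetical.
CREATE KEYSPACE retail
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};

-- customer_id is the partition key; order_id clusters rows
-- within the partition, so all of a customer's orders live together.
CREATE TABLE retail.orders (
  customer_id bigint,
  order_id    timeuuid,
  total       decimal,
  PRIMARY KEY (customer_id, order_id)
);

-- Basic CRUD via cqlsh.
INSERT INTO retail.orders (customer_id, order_id, total)
VALUES (42, now(), 99.95);

SELECT * FROM retail.orders WHERE customer_id = 42;

-- Batch statement: multiple writes applied as a unit.
BEGIN BATCH
  INSERT INTO retail.orders (customer_id, order_id, total) VALUES (42, now(), 10.00);
  DELETE FROM retail.orders WHERE customer_id = 7;
APPLY BATCH;
```

Note that queries filter on the partition key (`customer_id`); that constraint is what drives the keyspace/column-family design mentioned above.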

Environment: Java 1.7, Cloudera Hadoop Distribution (CDH) 5.2.6, Hadoop & YARN 2.5, Hive 0.13, Hue 3.6.0, Tez 0.4, Sqoop 1.4.5, Oozie 4.0.0, Pig 0.12, DataStax DevCenter 1.4, Apache Cassandra 2.2, Teradata, Shell scripting, Quality Control, Rational Team Concert, Maven

Confidential, New York, NY

Hadoop Developer

Responsibilities:

  • Involved in collecting the business requirements for the project.
  • Attended business meetings to gather requirements and wrote the functional requirements document based on them.
  • Participated in technical discussions and overall architecture, and communicated with the other integration teams.
  • Used Sqoop to import/export data between RDBMS systems and HDFS.
  • Wrote MapReduce and Hive jobs to process data per business requirements for further analysis by data analysts.
  • Used Hive partitioning and bucketing to improve performance and to maintain historical data in tables in a modular fashion.
  • Wrote Oozie workflows to schedule batch jobs.
  • Extensively worked on unit and integration testing.
  • Actively involved in QA and production bug fixes.

Environment: Java 1.7, Hadoop & YARN 2.0, Hive 0.10, Sqoop, Oozie, Oracle, Shell scripting, Quality Control, MKS, Maven

Confidential

Java developer

Responsibilities:

  • Involved in walking through the functional requirements and estimating the effort to deliver them.
  • Developed the backend code, which implements the business logic and interacts with the database.
  • Performed unit testing and delivered quality code.
  • Performed build and deployment activities using Maven.
  • Involved in database schema design and developed stored procedures.

Environment: Java 1.6, Amdocs Smart Client Framework 8.0, Log4j, Quality Control, MKS

Confidential, New York, NY

Java Developer

Responsibilities:

  • Involved in walking through the functional requirements and estimating the effort to deliver them.
  • Developed the front-end UI screens and respective business logic for all modules using the Spring Core and Spring MVC frameworks.
  • Performed unit testing and delivered quality code.
  • Performed build and deployment activities using Maven.
  • Involved in database schema design and developed stored procedures.

Environment: Java 1.6, Spring MVC framework, Spring IoC, Servlets, JSP, JDBC, HTML, CSS, JavaScript, Oracle, Shell scripting, Log4j, Bugzilla, CVS, Ant
