Hadoop Developer/Admin Resume

Birmingham, MI

SUMMARY:

  • Around 8 years of IT experience and technical proficiency as a Big Data developer and administrator and as a front-end user interface developer, designing and developing user interfaces and professional web applications. Able to analyze, diagnose, and resolve complex programming and system problems independently, with excellent communication, interpersonal, and analytical skills.
  • Experience in installation, configuration, support and management of a Hadoop Cluster.
  • Worked on Big Data warehousing projects, designing and developing history-migration and daily-load processes.
  • Experience in architecting, designing, installation, configuration and management of Apache Hadoop Clusters & Cloudera Hadoop Distribution.
  • Technical expertise in Big Data/Hadoop (HDFS, MapReduce, Spark, Hive, Pig, Sqoop, Flume, Oozie), NoSQL databases (HBase, Cassandra), SQL, and Unix scripting.
  • Excellent programming skills with experience in Java, C, SQL and Python Programming.
  • Good experience applying modern software development approaches, including MVC and event-driven applications using AJAX, object-oriented (OO) JavaScript, JSON, and XML.
  • Experienced in interacting with Clients, Business Analysts, IT leads, UAT Users and developers.
  • Experienced in the SDLC, Agile (Scrum) methodology, and iterative Waterfall.
  • Experience in importing and exporting data using Sqoop from HDFS to Relational Database Systems and vice-versa.
  • Experience extracting and loading large volumes of data into/from MS SQL Server, DB2, Oracle, and Teradata tables.
  • Hands on experience in application development using Java, RDBMS, and Linux shell scripting.
  • Experienced in configuring the job scheduling in Linux using shell scripts and Crontab.
  • Experience working with Core Java and J2EE-based applications.
  • Experience implementing the Apache Axis and Apache CXF frameworks to build and consume SOAP-based and RESTful web services.
  • Developed test plans, test scenarios, test cases, test procedures, and test reports, documenting test results after analyzing Business Requirements Documents (BRD) and Functional Requirement Specifications (FRS).

TECHNICAL SKILLS:

Big Data Technologies: HDFS, Hive, Pig, Sqoop, Oozie, MapReduce, ZooKeeper, HBase, Cassandra, Spark, Flume, Storm, Kafka

Programming Languages: Java, C/C++, SQL, Python, PL/SQL, JSP, Servlets, JDBC, Spring, Hibernate, HTML4/5, CSS2/3, JavaScript, jQuery, AJAX, JSON, XML, Unix shell scripting

Software/Tools/Editors: Eclipse, IntelliJ IDEA, NetBeans, CVS, SVN, Git

Databases: Oracle, MySQL, MS SQL Server, Cassandra, HBase

Web/App Servers: Apache Tomcat 7.0, HTTP Web Server

Operating System: Windows, Ubuntu

PROFESSIONAL EXPERIENCE:

Confidential, Birmingham, MI

Hadoop Developer/Admin

Responsibilities:

  • Set up the Hortonworks/Cloudera Hadoop cluster for the project.
  • Secured the cluster using Kerberos, kept it up and running at all times, and troubleshot any problems that arose.
  • Handled importing of data from various sources, performed transformations using Hive and MapReduce, loaded data into HDFS, and extracted data from MySQL into HDFS using Sqoop.
  • Exported the analyzed data to the relational databases using Sqoop for visualization and to generate reports for the BI team.
  • Developed simple to complex MapReduce jobs, in Java and through Hive.
  • Analyzed the data by running Hive queries and Pig scripts to understand user behavior.
  • Wrote Spark programs in Scala and ran Spark jobs.
  • Created partitioned tables in Hive.
  • Installed and configured Hadoop 0.22.0 (MapReduce, HDFS) and developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
  • Imported and exported data into HDFS and Hive using Sqoop.
  • Converted Hive queries to Spark SQL, using Parquet as the storage format.
  • Implemented Spark RDD transformations and actions to migrate MapReduce algorithms (see the RDD sketch after this list).
  • Managed data coming from different sources.
  • Monitored running MapReduce programs on the cluster.
  • Responsible for loading data from UNIX file systems to HDFS.
  • Installed and configured Hive, and wrote Hive UDFs (a UDF sketch appears after this list).
  • Deployed the Hadoop application on AWS (Amazon Web Services).
  • Once transformed, the data was moved to a Spark cluster and pushed live to the application using Spark Streaming and Kafka (see the streaming sketch after this list).
  • Created RDDs in Spark and extracted data from the data warehouse onto them.
  • Architected and implemented the product platform, as well as all data transfer, storage, and processing from the data center to Hadoop file systems.
  • Created Hive tables, loaded them with data, and wrote Hive queries that invoke MapReduce jobs in the backend.
  • Used Python for pattern matching in build logs to format errors and warnings.
  • Built wrapper shell scripts to invoke these Oozie workflows.
  • Implemented the workflows using Apache Oozie framework to automate tasks.
  • Wrote Unix/Linux shell scripts for scheduling jobs, as well as Pig scripts and HiveQL.
  • Developed scripts to automate end-to-end data management and synchronization between all the clusters.
  • Worked closely with the data warehouse architect and business intelligence analysts to develop solutions.
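
To illustrate the RDD migration noted above, here is a minimal sketch using Spark's Java API (assuming Spark 2.x with Java 8 lambdas), re-expressing a word count as RDD transformations and actions; the application name and HDFS paths are placeholders rather than values from the project.

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class RddWordCount {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("RddWordCount");
            JavaSparkContext sc = new JavaSparkContext(conf);

            // Transformation: split each line into words (lazy; nothing runs yet).
            JavaRDD<String> words = sc.textFile("hdfs:///data/input") // placeholder path
                    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator());

            // Transformations: pair each word with 1, then sum counts per word.
            JavaPairRDD<String, Integer> counts = words
                    .mapToPair(w -> new Tuple2<>(w, 1))
                    .reduceByKey((a, b) -> a + b);

            // Action: triggers execution and writes results back to HDFS.
            counts.saveAsTextFile("hdfs:///data/output"); // placeholder path
            sc.close();
        }
    }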
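
A minimal sketch of a Hive UDF of the kind referenced above, built on the classic org.apache.hadoop.hive.ql.exec.UDF base class; the function name and the trim-and-lowercase behavior are illustrative, not the project's actual UDFs.

    import org.apache.hadoop.hive.ql.exec.Description;
    import org.apache.hadoop.hive.ql.exec.UDF;
    import org.apache.hadoop.io.Text;

    // Illustrative UDF: normalizes a string column (trim + lowercase).
    @Description(name = "normalize_str", value = "_FUNC_(str) - trims and lowercases str")
    public final class NormalizeStr extends UDF {
        public Text evaluate(Text input) {
            if (input == null) {
                return null; // pass NULLs through unchanged
            }
            return new Text(input.toString().trim().toLowerCase());
        }
    }

Packaged into a jar, such a UDF would be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being used in queries.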
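
The Spark Streaming/Kafka hand-off described above could look roughly like the sketch below, written against the spark-streaming-kafka-0-10 direct-stream API; the broker address, topic, and consumer group are placeholders.

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    public class KafkaToApp {
        public static void main(String[] args) throws InterruptedException {
            SparkConf conf = new SparkConf().setAppName("KafkaToApp");
            JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(10));

            Map<String, Object> kafkaParams = new HashMap<>();
            kafkaParams.put("bootstrap.servers", "broker1:9092");      // placeholder
            kafkaParams.put("key.deserializer", StringDeserializer.class);
            kafkaParams.put("value.deserializer", StringDeserializer.class);
            kafkaParams.put("group.id", "transformed-data-consumers"); // placeholder

            // Direct stream from Kafka; each record carries transformed data.
            JavaInputDStream<ConsumerRecord<String, String>> stream =
                    KafkaUtils.createDirectStream(
                            jssc,
                            LocationStrategies.PreferConsistent(),
                            ConsumerStrategies.<String, String>Subscribe(
                                    Arrays.asList("transformed-data"), kafkaParams));

            // Downstream delivery to the application would go here; this
            // placeholder just reports the size of each micro-batch.
            stream.foreachRDD(rdd -> System.out.println("records: " + rdd.count()));

            jssc.start();
            jssc.awaitTermination();
        }
    }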

Environment: Apache Hadoop, Java (JDK 1.6), Python, shell scripting, DataStax, flat files, Oracle 11g/10g, MySQL, Toad 9.6, Windows NT, UNIX, AWS, Sqoop, Hive, Oozie.

Confidential, Lowell, MA

Hadoop Developer

Responsibilities:

  • Analyzed large data sets by running Hive queries and Pig scripts
  • Involved in creating Hive tables and loading and analyzing data using Hive queries
  • Developed Simple to complex Map Reduce Jobs using Hive and Pig
  • Involved in running Hadoop jobs for processing millions of records of text data
  • Loaded and transformed large sets of structured, semi-structured, and unstructured data.
  • Managed data coming from different sources.
  • Implemented partitioning, dynamic partitions, and buckets in Hive (see the Hive sketch after this list).
  • Monitored system health and logs, responding to any warning or failure conditions.
  • Architected and implemented the product platform, as well as all data transfer, storage, and processing from the data center to Hadoop file systems.
  • Implemented the workflows using Apache Oozie framework to automate tasks.
  • Worked with application teams to install operating system and Hadoop updates, patches, and version upgrades as required.
  • Worked closely with the data warehouse architect and business intelligence analysts to develop solutions.
  • Developed multiple MapReduce jobs in Java for data cleaning and preprocessing (a MapReduce sketch follows this list).
  • Loaded data from the Linux file system to HDFS.
  • Ran Hadoop Streaming jobs to process terabytes of XML-format data.
  • Assisted in exporting analyzed data to relational databases using Sqoop
  • Supported MapReduce programs running on the cluster.
  • Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
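
The Hive partitioning and bucketing mentioned above could be driven from Java over HiveServer2's JDBC interface, roughly as in this sketch; the connection URL, table, and column names are assumptions for illustration, and the hive-jdbc driver is presumed to be on the classpath.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class HivePartitionDemo {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC endpoint -- placeholder host and database.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:hive2://hiveserver:10000/default");
                 Statement stmt = conn.createStatement()) {

                // Partitioned, bucketed table (illustrative schema).
                stmt.execute("CREATE TABLE IF NOT EXISTS events ("
                        + " id BIGINT, payload STRING)"
                        + " PARTITIONED BY (event_date STRING)"
                        + " CLUSTERED BY (id) INTO 32 BUCKETS"
                        + " STORED AS ORC");

                // Enable dynamic partitioning for the load.
                stmt.execute("SET hive.exec.dynamic.partition=true");
                stmt.execute("SET hive.exec.dynamic.partition.mode=nonstrict");

                // Dynamic-partition insert: partition values come from the data.
                stmt.execute("INSERT INTO TABLE events PARTITION (event_date)"
                        + " SELECT id, payload, event_date FROM staging_events");
            }
        }
    }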
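
And as an illustration of the data-cleaning MapReduce jobs above, here is a minimal map-only job sketched against the Hadoop 2.x mapreduce API; the pipe delimiter and five-field record layout are assumptions for the example.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class CleanRecordsJob {

        // Map-only job: drop malformed rows, trim fields, emit clean records.
        public static class CleanMapper
                extends Mapper<LongWritable, Text, NullWritable, Text> {
            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                String[] fields = value.toString().split("\\|", -1); // assumed delimiter
                if (fields.length != 5) {                            // assumed field count
                    context.getCounter("clean", "malformed").increment(1);
                    return;                                          // skip bad record
                }
                StringBuilder out = new StringBuilder();
                for (int i = 0; i < fields.length; i++) {
                    if (i > 0) out.append('|');
                    out.append(fields[i].trim());
                }
                context.write(NullWritable.get(), new Text(out.toString()));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "clean-records");
            job.setJarByClass(CleanRecordsJob.class);
            job.setMapperClass(CleanMapper.class);
            job.setNumReduceTasks(0); // map-only: mapper output is the job output
            job.setOutputKeyClass(NullWritable.class);
            job.setOutputValueClass(Text.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }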

Environment: Hadoop, HDFS, Pig, Hive, MapReduce, Sqoop, Oozie, Big Data, Java (JDK 1.6), DataStax, flat files, Oracle 11g/10g, MySQL, Toad, Windows NT, Linux

Confidential, Dearborn, MI

Senior Software Engineer

Responsibilities:

  • Designed and developed technical specifications using design patterns and OO methodology.
  • Developed web application using Struts Framework.
  • Developed user interfaces using JSP, HTML and CSS.
  • Used Eclipse as the IDE to develop the application, and created web.xml, struts-config.xml, and validation.xml files to integrate all the components in the Struts framework.
  • Worked heavily with Struts tags; used Struts as the front controller of the web application.
  • Implemented the Apache Axis and Apache CXF frameworks to build and consume SOAP-based and RESTful web services.
  • Implemented the Struts framework according to the MVC design pattern.
  • Used the Struts framework to generate forms and actions for validating user request data (see the action sketch after this list).
  • Developed server-side validation checks using Struts validators, along with JavaScript validations.
  • Developed and implemented data validations with JSPs and Struts custom tags.
  • Developed applications that access the database through JDBC to execute queries, prepared statements, and procedures (a JDBC sketch follows this list).
  • Developed programs to manipulate data and perform CRUD operations against the database on request.
  • Coded SQL, PL/SQL, and views against IBM DB2.
  • Worked through issues while converting Java to AJAX.
  • Developed DAOs and entities using JPA reverse engineering.
  • Helped develop the business tier using stateless session beans.
  • Used GWT to build screens and make remote procedure calls to middleware.
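
A minimal sketch of the Struts 1-style form/action pattern described above; the class names, the form field, and the forward name are illustrative.

    // UserForm.java -- hypothetical form bean; its validation rules would
    // live in validation.xml alongside the Struts validators.
    import org.apache.struts.action.ActionForm;

    public class UserForm extends ActionForm {
        private String userName;
        public String getUserName() { return userName; }
        public void setUserName(String userName) { this.userName = userName; }
    }

    // SaveUserAction.java -- receives the validated form, forwards to a JSP.
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.struts.action.Action;
    import org.apache.struts.action.ActionForm;
    import org.apache.struts.action.ActionForward;
    import org.apache.struts.action.ActionMapping;

    public class SaveUserAction extends Action {
        @Override
        public ActionForward execute(ActionMapping mapping, ActionForm form,
                                     HttpServletRequest request,
                                     HttpServletResponse response) throws Exception {
            UserForm userForm = (UserForm) form;
            request.setAttribute("userName", userForm.getUserName());
            return mapping.findForward("success"); // defined in struts-config.xml
        }
    }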
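
And a sketch of the JDBC access pattern mentioned above, using PreparedStatement for parameterized queries; the connection URL, credentials, and customers table are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class CustomerDao {
        private static final String URL = "jdbc:db2://dbhost:50000/APPDB"; // placeholder

        // Parameterized read: avoids SQL injection and reuses the statement plan.
        public String findName(int customerId) throws Exception {
            try (Connection conn = DriverManager.getConnection(URL, "user", "pass");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT name FROM customers WHERE id = ?")) {
                ps.setInt(1, customerId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }

        // Parameterized update (the U in CRUD).
        public int rename(int customerId, String newName) throws Exception {
            try (Connection conn = DriverManager.getConnection(URL, "user", "pass");
                 PreparedStatement ps = conn.prepareStatement(
                         "UPDATE customers SET name = ? WHERE id = ?")) {
                ps.setString(1, newName);
                ps.setInt(2, customerId);
                return ps.executeUpdate();
            }
        }
    }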

Environment: Windows XP, Java/J2ee, Struts, JUNIT, Java, Servlets, JavaScript, SQL, HTML, XML, Eclipse.

Confidential

Senior Software Engineer

Responsibilities:

  • Handled complete design, coding, and support for the Resource Maintenance and Bill Generation modules.
  • Worked with business analysts on requirements gathering.
  • Wrote SQL scripts to create and maintain the database, roles, users, tables, views, procedures, and triggers in Oracle.
  • Designed and implemented the UI using HTML, JSP, JavaScript and Java.
  • Implemented multithreading functionality using the Java threading API (see the threading sketch after this list).
  • Worked extensively on IBM WebSphere 6.0 while implementing the project.
  • Developed the UI screens using HTML5, DHTML, XML, JavaScript, AJAX, jQuery, custom tags, JSTL, DOM layout, and CSS3.
  • Built skills in WebLogic, Spring Batch, Spring, and Java.
  • Used the JUnit and EasyMock frameworks for unit testing and followed a Test-Driven Development (TDD) methodology (a test sketch follows this list).
  • Used Oracle as the backend database on Windows, and developed stored procedures, functions, and triggers.
  • Performed unit and integration testing to find and fix bugs.
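
A minimal sketch of the multithreading described above, using plain Thread and Runnable from the core Java API; the batch-per-thread split and the work item are illustrative.

    import java.util.ArrayList;
    import java.util.List;

    public class BillGenerationWorkers {
        public static void main(String[] args) throws InterruptedException {
            List<Thread> workers = new ArrayList<>();

            // Spin up one worker thread per batch.
            for (int batch = 0; batch < 4; batch++) {
                final int batchId = batch;
                Thread t = new Thread(() -> {
                    // Illustrative work item: generate bills for one batch.
                    System.out.println(Thread.currentThread().getName()
                            + " processing batch " + batchId);
                }, "bill-worker-" + batchId);
                workers.add(t);
                t.start();
            }

            // Wait for every worker to finish before continuing.
            for (Thread t : workers) {
                t.join();
            }
        }
    }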
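
And a sketch of the JUnit/EasyMock testing style referenced above; the service and repository types are hypothetical stand-ins for the real collaborators.

    import static org.easymock.EasyMock.createMock;
    import static org.easymock.EasyMock.expect;
    import static org.easymock.EasyMock.replay;
    import static org.easymock.EasyMock.verify;
    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class BillServiceTest {

        // Hypothetical collaborator to be mocked.
        interface RateRepository {
            double rateFor(String plan);
        }

        // Hypothetical unit under test.
        static class BillService {
            private final RateRepository rates;
            BillService(RateRepository rates) { this.rates = rates; }
            double bill(String plan, int units) { return rates.rateFor(plan) * units; }
        }

        @Test
        public void billMultipliesRateByUnits() {
            RateRepository rates = createMock(RateRepository.class);
            expect(rates.rateFor("basic")).andReturn(0.5); // stub the dependency
            replay(rates);

            double total = new BillService(rates).bill("basic", 10);

            assertEquals(5.0, total, 1e-9);
            verify(rates); // assert the expected interaction happened
        }
    }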

Environment: SQL, HTML, JSP, JavaScript, Java, IBM WebSphere 6.0, DHTML, XML, AJAX, jQuery custom tags, JSTL DOM layout, and CSS3.

Confidential

Java/Technical Support Engineer

Responsibilities:

  • Worked extensively on acquiring requirements from business analysts and participated in all requirement-clarification calls.
  • Reviewed and understood the design documents.
  • Involved in detail-level design and coding activities offshore.
  • Involved in code reviews.
  • Wrote and ran JUnit test classes.
  • Provided support for client applications in production and other environments.
  • Worked on tickets raised by real-time users, interacting continuously with end users.
  • Prepared the technical design document, the understanding document, and test cases (UTCs and ITCs).
  • Designed and coded web applications using Core Java and J2EE technologies (JSP, JDBC), along with Jenkins and GitHub.
  • Provided Technical & Functional support to the end users during UAT & Production.
  • Continuously monitored the application for 100% availability.

Environment: Java, Spring MVC, MIMA ORM, CVS, AQT, WebSphere, Oracle 10g, and the HPSM ticketing tool.
