Java Developer Resume
VA
SUMMARY
- 8 years of professional experience in software development and requirement analysis in Agile environments, with a strong emphasis on Big Data and Hadoop ecosystem technologies.
- Experience with Apache Hadoop components such as HDFS, MapReduce, HiveQL, HBase, Pig, Sqoop, Oozie, Mahout, Python, Cassandra, MongoDB, and Big Data analytics.
- Good exposure to Apache Hadoop MapReduce programming, Pig scripting, distributed applications and HDFS.
- Good knowledge of Hadoop cluster architecture, cluster monitoring, and Solr/Lucene.
- Experienced in processing Big Data on the Apache Hadoop framework using MapReduce programs.
- Experience in analyzing data using Pig Latin, HiveQL and HBase.
- Experience in Object Oriented Analysis and Design (OOAD) and software development using UML methodology; good knowledge of J2EE and Core Java design patterns.
- Strong experience in J2EE architecture, use case analysis and UML for building highly sophisticated systems.
- Familiar with the Java Virtual Machine (JVM) and multi-threaded processing (see the threading sketch after this list).
- Very good experience in the complete project life cycle (design, development, testing and implementation) of client-server and web applications.
- Extensive experience working with Oracle, DB2, SQL Server, PL/SQL and MySQL databases.
- Hands on experience in application development using Java, RDBMS, and UNIX shell scripting.
- Performed data analysis using MySQL, SQL Server Management Studio and Oracle.
- Expertise in creating conceptual data models, process/data flow diagrams, use case diagrams and state diagrams.
- Experience with web-based UI development using jQuery, jQuery UI, CSS, HTML, HTML5, XHTML and JavaScript.
- Strong analytical skills with the ability to quickly understand clients' business needs; involved in meetings to gather information and requirements from clients.
- Research-oriented, motivated, proactive, self-starter with strong technical, analytical and interpersonal skills.
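To illustrate the JVM multi-threading point above, here is a minimal, hypothetical sketch (not taken from any project below) that fans per-record work out over a fixed thread pool:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// A fixed pool bounds concurrency so the JVM is not flooded with threads.
// The pool size and per-record work are illustrative stand-ins.
public class ParallelProcessor {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> results = new ArrayList<Future<Integer>>();
        for (int i = 0; i < 10; i++) {
            final int record = i;
            results.add(pool.submit(new Callable<Integer>() {
                public Integer call() {
                    return record * record; // stand-in for real per-record work
                }
            }));
        }
        for (Future<Integer> f : results) {
            System.out.println(f.get()); // blocks until each task completes
        }
        pool.shutdown();
    }
}
```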
PROFESSIONAL EXPERIENCE
Confidential, CO
Hadoop Developer
Responsibilities:
- Analyzed data on the Hadoop cluster using big data analytic tools including Pig, Hive and MapReduce.
- Collected and aggregated large amounts of log data using Apache Flume and staged it in HDFS for further analysis.
- Worked on debugging, performance tuning of Hive & Pig Jobs.
- Created HBase tables to store PII data arriving in various formats from different portfolios (see the HBase sketch after this list).
- Implemented test scripts to support test driven development and continuous integration.
- Worked on tuning the performance of Pig queries.
- Loaded data from the Linux file system into HDFS.
- Imported and exported data into HDFS and Hive using Sqoop.
- Processed unstructured data using Pig and Hive.
- Implemented partitioning, dynamic partitions and buckets in Hive (see the Hive DDL sketch after this list).
- Ran Hadoop streaming jobs to process terabytes of XML-format data.
- Supported MapReduce programs running on the cluster.
- Gained experience in managing and reviewing Hadoop log files.
- Scheduled multiple Hive and Pig jobs with the Oozie workflow engine.
- Developed Pig Latin scripts to extract data from the web server output files to load into HDFS.
- Extensively used Pig for data cleansing.
- Created and maintained technical documentation for launching Hadoop clusters and for executing Hive queries and Pig scripts.
- Strong experience with Apache server configuration.
- Worked extensively with Kafka and Storm.
- Exported result sets from Hive to MySQL using shell scripts.
- Developed Hive queries for analysts.
- Implemented SQL and PL/SQL stored procedures.
- Actively involved in code review and bug fixing to improve performance.
- Developed screens using JSP, DHTML, CSS, AJAX, JavaScript, Struts, Spring, Java and XML.
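A minimal sketch of the HBase table creation mentioned above, using the classic HBase Java client; the table and column-family names are hypothetical, not from the project:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Creates a table with a single column family if it does not already exist.
public class CreatePiiTable {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HBaseAdmin admin = new HBaseAdmin(conf);
        HTableDescriptor table = new HTableDescriptor(TableName.valueOf("pii_records"));
        table.addFamily(new HColumnDescriptor("cf")); // one family per access pattern
        if (!admin.tableExists(table.getTableName())) {
            admin.createTable(table);
        }
        admin.close();
    }
}
```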
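And a minimal sketch of the Hive partitioning and bucketing DDL, issued here through the HiveServer2 JDBC driver; the host, table and column names are assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Partition by date for pruning; bucket by user id for sampling and joins.
public class HivePartitioningExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "", "");
        Statement stmt = conn.createStatement();
        stmt.execute("CREATE TABLE IF NOT EXISTS web_logs (user_id STRING, url STRING) "
                + "PARTITIONED BY (log_date STRING) "
                + "CLUSTERED BY (user_id) INTO 32 BUCKETS");
        // Dynamic partitioning lets Hive derive partitions from the data itself.
        stmt.execute("SET hive.exec.dynamic.partition=true");
        stmt.execute("SET hive.exec.dynamic.partition.mode=nonstrict");
        stmt.execute("SET hive.enforce.bucketing=true");
        stmt.execute("INSERT OVERWRITE TABLE web_logs PARTITION (log_date) "
                + "SELECT user_id, url, log_date FROM staging_logs");
        stmt.close();
        conn.close();
    }
}
```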
Environment: Hadoop, HDFS, Pig, Hive, MapReduce, Sqoop, Linux, Cloudera, Big Data, Java APIs, Java collections, SQL, AJAX.
Confidential, NJ
Hadoop Developer
Responsibilities:
- Worked with business partners to gather business requirements.
- Developed the application by using the Spring MVC framework.
- Created connections through JDBC and used JDBC statements to call stored procedures.
- Responsible for building scalable distributed data solutions using Hadoop.
- Developed Pig Latin scripts to extract the data from the web server output files to load into HDFS.
- Developed Pig UDFs to pre-process the data for analysis (see the Pig UDF sketch after this list).
- Implemented multiple MapReduce jobs in Java for data cleansing and pre-processing (a minimal example also follows this list).
- Loaded data from the UNIX file system into HDFS.
- Developed job workflows in Oozie to automate the tasks of loading the data into HDFS.
- Responsible for creating Hive tables, loading data and writing Hive queries.
- Created partitioned tables in Hive.
- Handled importing of data from various sources, performed transformations using Hive and MapReduce, loaded data into HDFS and extracted data from Teradata into HDFS using Sqoop.
- Worked extensively with Sqoop for importing metadata from Oracle.
- Configured Sqoop and developed scripts to extract data from SQL Server into HDFS.
- Expertise in exporting analyzed data to relational databases using Sqoop.
- Implemented the Fair Scheduler on the JobTracker to share cluster resources among users' MapReduce jobs.
- Provided cluster coordination services through ZooKeeper.
- Responsible for running Hadoop streaming jobs to process terabytes of XML data.
- Gained experience in managing and reviewing Hadoop log files.
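A minimal sketch of a Pig eval UDF of the kind described above; the class name and cleaning rule are illustrative, not the project's logic:

```java
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

// Normalizes a string field before analysis: trim whitespace, lowercase.
public class NormalizeField extends EvalFunc<String> {
    @Override
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0 || input.get(0) == null) {
            return null; // Pig treats null as "no value" downstream
        }
        return input.get(0).toString().trim().toLowerCase();
    }
}
```

Once packaged in a jar, such a function would be registered with REGISTER and invoked from Pig Latin like any built-in.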
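And a minimal map-only MapReduce job for data cleansing in the Hadoop 1.x API; the field count and tab delimiter are assumptions, not the project's actual record format:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Drops malformed records and trims the rest; no reduce phase needed.
public class CleansingJob {
    public static class CleanseMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\\t");
            if (fields.length == 5 && !fields[0].isEmpty()) { // keep well-formed rows only
                context.write(new Text(value.toString().trim()), NullWritable.get());
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "data-cleansing");
        job.setJarByClass(CleansingJob.class);
        job.setMapperClass(CleanseMapper.class);
        job.setNumReduceTasks(0); // map-only: cleansed rows go straight to HDFS
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(NullWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```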
Environment: Hadoop 1.x, HDFS, MapReduce, Hive 0.10, Pig, Sqoop, HBase, shell scripting, Oozie, Oracle 10g, SQL Server 2008, Ubuntu 13.04, Spring MVC, J2EE, Java 6.0, JDBC, Apache Tomcat.
Confidential, VA
Hadoop Developer
Responsibilities:
- Installed and configured Hadoop MapReduce, HDFS.
- Developed multiple MapReduce jobs in Java for data cleaning and preprocessing.
- Imported and exported data into HDFS and Hive using Sqoop.
- Experienced in defining job flows.
- Experienced in managing and reviewing Hadoop log files.
- Ran Hadoop streaming jobs to process terabytes of XML-format data.
- Loaded and transformed large sets of structured, semi-structured and unstructured data.
- Responsible for managing data coming from different sources.
- Supported MapReduce programs running on the cluster.
- Involved in loading data from UNIX file system to HDFS.
- Installed and configured Hive and wrote Hive UDFs (see the UDF sketch after this list).
- Created Hive tables, loaded them with data and wrote Hive queries that run internally as MapReduce jobs.
- Gained very good business knowledge of health insurance, claim processing, fraud suspect identification and the appeals process.
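A minimal sketch of a classic (old-style) Hive UDF like those mentioned above; the masking rule is illustrative, not the project's logic:

```java
import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Masks all but the last four characters of a value, e.g. for PII fields.
public final class MaskValue extends UDF {
    public Text evaluate(Text input) {
        if (input == null) {
            return null;
        }
        String s = input.toString();
        int keep = Math.min(4, s.length());
        StringBuilder masked = new StringBuilder();
        for (int i = 0; i < s.length() - keep; i++) {
            masked.append('*');
        }
        masked.append(s.substring(s.length() - keep));
        return new Text(masked.toString());
    }
}
```

After packaging, such a UDF would be exposed in Hive with CREATE TEMPORARY FUNCTION.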
Environment: Hadoop, MapReduce, HDFS, Hive, Java (JDK 1.6), Hadoop distributions from Hortonworks, Cloudera, MapR and DataStax, IBM DataStage 8.1 (Designer, Director, Administrator), flat files, Oracle 11g/10g, PL/SQL, SQL*Plus, Toad 9.6, Windows NT, UNIX shell scripting, Autosys r11.0.
Confidential, VA
Java Developer
Responsibilities:
- Responsible for requirement gathering and analysis through interaction with end users.
- Involved in designing use case diagrams, class diagrams and interaction diagrams using UML with Rational Rose.
- Designed and developed the application using design patterns such as Session Facade, Business Delegate and Service Locator.
- Worked on Maven build tool.
- Involved in developing JSP pages using Struts custom tags, jQuery and the Tiles framework.
- Used JavaScript to perform client-side validations and the Struts Validator framework for server-side validation.
- Good experience in Mule development.
- Developed rich Internet web applications using Java applets, Silverlight and JavaFX.
- Involved in creating SQL and PL/SQL queries and stored procedures.
- Implemented singleton classes for loading properties and static data from the database (see the singleton sketch after this list).
- Debugged and developed applications using Rational Application Developer (RAD).
- Developed a Web service to communicate with the database using SOAP.
- Developed DAOs (data access objects) using Spring Framework 3 (a minimal DAO sketch also follows this list).
- Deployed the components to WebSphere Application Server 7.
- Actively involved in back-end tuning of SQL queries and DB scripts.
- Wrote commands using UNIX shell scripting.
- Involved in developing server-side components of other subsystems.
- Provided production support, using IBM ClearQuest to track and fix bugs.
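A minimal sketch of the singleton property-loading pattern mentioned above; the properties file name is hypothetical:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

// Eagerly initialized singleton: properties are read once per JVM.
public final class AppConfig {
    private static final AppConfig INSTANCE = new AppConfig();
    private final Properties props = new Properties();

    private AppConfig() {
        InputStream in = AppConfig.class.getResourceAsStream("/app.properties");
        try {
            if (in != null) {
                props.load(in);
            }
        } catch (IOException e) {
            throw new IllegalStateException("Could not load app.properties", e);
        } finally {
            if (in != null) {
                try { in.close(); } catch (IOException ignored) { }
            }
        }
    }

    public static AppConfig getInstance() {
        return INSTANCE;
    }

    public String get(String key) {
        return props.getProperty(key);
    }
}
```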
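And a minimal Spring 3 style DAO built on JdbcTemplate; the table and column names are hypothetical:

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.List;
import javax.sql.DataSource;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.core.RowMapper;

// JdbcTemplate handles connections and exception translation;
// the DAO supplies only SQL and row mapping.
public class AccountDao {
    private final JdbcTemplate jdbcTemplate;

    public AccountDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    public List<String> findAccountNames() {
        return jdbcTemplate.query(
                "SELECT name FROM accounts",
                new RowMapper<String>() {
                    public String mapRow(ResultSet rs, int rowNum) throws SQLException {
                        return rs.getString("name");
                    }
                });
    }
}
```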
Environment: Java EE 6, IBM WebSphere Application Server 7, Apache Struts 2.0, EJB 3, Spring 3.2, JSP 2.0, Web Services, jQuery 1.7, Servlet 3.0, Struts Validator, Struts Tiles, tag libraries, ANT 1.5, JDBC, Oracle 11g/SQL, JUnit 3.8, CVS 1.2, Rational ClearCase, Eclipse 4.2, JSTL, DHTML.
Confidential, NJ
Java Developer
Responsibilities:
- Created use case diagrams, sequence diagrams, functional specifications and user interface diagrams using StarUML.
- Involved in complete requirement analysis, design, coding and testing phases of the project.
- Participated in JAD meetings to gather requirements and understand the end users' system.
- Developed user interfaces using JSP, HTML, XML and JavaScript.
- Generated XML Schemas and used XML Beans to parse XML files.
- Created stored procedures and functions; used JDBC to process database calls for DB2/AS400 and SQL Server databases (see the JDBC sketch after this list).
- Developed code to create XML files and flat files from data retrieved from databases and XML files.
- Created data sources and helper classes used by all the interfaces to access and manipulate data.
- Developed web application called iHUB (integration hub) to initiate all the interface processes using Struts Framework, JSP and HTML.
- Developed the interfaces using Eclipse 3.1.1 and JBoss 4.1; involved in integration testing, bug fixing and production support.
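A minimal sketch of a JDBC stored procedure call of the kind described above; the connection URL, procedure name and parameters are hypothetical:

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

// CallableStatement wraps the database's stored procedure call syntax.
public class StoredProcCall {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:sqlserver://localhost:1433;databaseName=orders", "user", "password");
        CallableStatement cs = conn.prepareCall("{call get_order_total(?, ?)}");
        cs.setInt(1, 42);                          // IN: order id
        cs.registerOutParameter(2, Types.DECIMAL); // OUT: total amount
        cs.execute();
        System.out.println("Total: " + cs.getBigDecimal(2));
        cs.close();
        conn.close();
    }
}
```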
Environment: Java 1.3, Servlets, JSPs, JavaMail API, JavaScript, HTML, MySQL 2.1, Swing, Java Web Server 2.0, JBoss 2.0, RMI, Rational Rose, Red Hat Linux 7.1